Dialogue on Appeals to Consequences
post by jessicata (jessica.liu.taylor) · 2019-07-18T02:34:52.497Z · LW · GW · 87 comments
This is a link post for https://unstableontology.com/2019/07/18/dialogue-on-appeals-to-consequences/
[note: the following is essentially an expanded version of this LessWrong comment [LW(p) · GW(p)] on whether appeals to consequences are normative in discourse. I am exasperated that this is even up for debate, but I figure that making the argumentation here explicit is helpful]
Carter and Quinn are discussing charitable matters in the town square, with a few onlookers.
Carter: "So, this local charity, People Against Drowning Puppies (PADP), is nominally opposed to drowning puppies."
Quinn: "Of course."
Carter: "And they said they'd saved 2170 puppies last year, whereas their total spending was $1.2 million, so they estimate they save one puppy per $553."
Quinn: "Sounds about right."
Carter: "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies."
Quinn: "Hold it right there. Regardless of whether that's true, it's bad to say that."
Carter: "That's an appeal to consequences, well-known to be a logical fallacy."
Quinn: "Is that really a fallacy, though? If saying something has bad consequences, isn't it normative not to say it?"
Carter: "Well, for my own personal decisionmaking, I'm broadly a consequentialist, so, yes."
Quinn: "Well, it follows that appeals to consequences are valid."
Carter: "It isn't logically valid. If saying something has bad consequences, that doesn't make it false."
Quinn: "But it is decision-theoretically compelling, right?"
Carter: "In theory, if it could be proven, yes. But, you haven't offered any proof, just a statement that it's bad."
Quinn: "Okay, let's discuss that. My argument is: PADP is a good charity. Therefore, they should be getting more donations. Saying that they didn't save as many puppies as they claimed they did, in public (as you just did), is going to result in them getting fewer donations. Therefore, your saying that they didn't save as many puppies as they claimed to is bad, and is causing more puppies to drown."
Carter: "While I could spend more effort to refute that argument, I'll initially note that you only took into account a single effect (people donating less to PADP) and neglected other effects (such as people having more accurate beliefs about how charities work)."
Quinn: "Still, you have to admit that my case is plausible, and that some onlookers are convinced."
Carter: "Yes, it's plausible, in that I don't have a full refutation, and my models have a lot of uncertainty. This gets into some complicated decision theory and sociological modeling. I'm afraid we've gotten sidetracked from the relatively clear conversation, about how many puppies PADP saved, to a relatively unclear one, about the decision theory of making actual charity effectiveness clear to the public."
Quinn: "Well, sure, we're into the weeds now, but this is important! If it's actually bad to say what you said, it's important that this is widely recognized, so that we can have fewer... mistakes like that."
Carter: "That's correct, but I feel like I might be getting trolled. Anyway, I think you're shooting the messenger: when I started criticizing PADP, you turned around and made the criticism about me saying that, directing attention against PADP's possible fraudulent activity."
Quinn: "You still haven't refuted my argument. If you don't do so, I win by default."
Carter: "I'd really rather that we just outlaw appeals to consequences, but, fine, as long as we're here, I'm going to do this, and it'll be a learning experience for everyone involved. First, you said that PADP is a good charity. Why do you think this?"
Quinn: "Well, I know the people there and they seem nice and hardworking."
Carter: "But, they said they saved over 2000 puppies last year, when they actually only saved 138, indicating some important dishonesty and ineffectiveness going on."
Quinn: "Allegedly, according to your calculations. Anyway, saying that is bad, as I've already argued."
Carter: "Hold up! We're in the middle of evaluating your argument that saying that is bad! You can't use the conclusion of this argument in the course of proving it! That's circular reasoning!"
Quinn: "Fine. Let's try something else. You said they're being dishonest. But, I know them, and they wouldn't tell a lie, consciously, although it's possible that they might have some motivated reasoning, which is totally different. It's really uncivil to call them dishonest like that. If everyone did that with the willingness you had to do so, that would lead to an all-out rhetorical war..."
Carter: "God damn it. You're making another appeal to consequences."
Quinn: "Yes, because I think appeals to consequences are normative."
Carter: "Look, at the start of this conversation, your argument was that saying PADP only saved 138 puppies is bad."
Quinn: "Yes."
Carter: "And now you're in the course of arguing that it's bad."
Quinn: "Yes."
Carter: "Whether it's bad is a matter of fact."
Quinn: "Yes."
Carter: "So we have to be trying to get the right answer, when we're determining whether it's bad."
Quinn: "Yes."
Carter: "And, while appeals to consequences may be decision theoretically compelling, they don't directly bear on the facts."
Quinn: "Yes."
Carter: "So we shouldn't have appeals to consequences in conversations about whether the consequences of saying something is bad."
Quinn: "Why not?"
Carter: "Because we're trying to get to the truth."
Quinn: "But aren't we also trying to avoid all-out rhetorical wars, and puppies drowning?"
Carter: "If we want to do those things, we have to do them by getting to the truth."
Quinn: "The truth, according to your opinion-"
Carter: "God damn it, you just keep trolling me, so we never get to discuss the actual facts. God damn it. Fuck you."
Quinn: "Now you're just spouting insults. That's really irresponsible, given that I just accused you of doing something bad, and causing more puppies to drown."
Carter: "You just keep controlling the conversation by OODA looping faster than me, though. I can't refute your argument, because you appeal to consequences again in the middle of the refutation. And then we go another step down the ladder, and never get to the truth."
Quinn: "So what do you expect me to do? Let you insult well-reputed animal welfare workers by calling them dishonest?"
Carter: "Yes! I'm modeling the PADP situation using decision-theoretic models, which require me to represent the knowledge states and optimization pressures exerted by different agents (both conscious and unconscious), including when these optimization pressures are towards deception, and even when this deception is unconscious!"
Quinn: "Sounds like a bunch of nerd talk. Can you speak more plainly?"
Carter: "I'm modeling the actual facts of how PADP operates and how effective they are, not just how well-liked the people are."
Quinn: "Wow, that's a strawman."
Carter: "Look, how do you think arguments are supposed to work, exactly? Whoever is best at claiming that their opponent's argumentation is evil wins?"
Quinn: "Sure, isn't that the same thing as who's making better arguments?"
Carter: "If we argue by proving our statements are true, we reach the truth, and thereby reach the good. If we argue by proving each other are being evil, we don't reach the truth, nor the good."
Quinn: "In this case, though, we're talking about drowning puppies. Surely, the good in this case is causing fewer puppies to drown, and directing more resources to the people saving them."
Carter: "That's under contention, though! If PADP is lying about how many puppies they're saving, they're making the epistemology of the puppy-saving field worse, leading to fewer puppies being saved. And, they're taking money away from the next-best-looking charity, which is probably more effective if, unlike PADP, they're not lying."
Quinn: "How do you know that, though? How do you know the money wouldn't go to things other than saving drowning puppies if it weren't for PADP?"
Carter: "I don't know that. My guess is that the money might go to other animal welfare charities that claim high cost-effectiveness."
Quinn: "PADP is quite effective, though. Even if your calculations are right, they save about one puppy per $10,000. That's pretty good."
Carter: "That's not even that impressive, but even if their direct work is relatively effective, they're destroying the epistemology of the puppy-saving field by lying. So effectiveness basically caps out there instead of getting better due to better epistemology."
Quinn: "What an exaggeration. There are lots of other charities that have misleading marketing (which is totally not the same thing as lying). PADP isn't singlehandedly destroying anything, except instances of puppies drowning."
Carter: "I'm beginning to think that the difference between us is that I'm anti-lying, whereas you're pro-lying."
Quinn: "Look, I'm only in favor of lying when it has good consequences. That makes me different from pro-lying scoundrels."
Carter: "But you have really sloppy reasoning about whether lying, in fact, has good consequences. Your arguments for doing so, when you lie, are made of Swiss cheese."
Quinn: "Well, I can't deductively prove anything about the real world, so I'm using the most relevant considerations I can."
Carter: "But you're using reasoning processes that systematically protect certain cached facts from updates, and use these cached facts to justify not updating. This was very clear when you used outright circular reasoning, to use the cached fact that denigrating PADP is bad, to justify terminating my argument that it wasn't bad to denigrate them. Also, you said the PADP people were nice and hardworking as a reason I shouldn't accuse them of dishonesty... but, the fact that PADP saved far fewer puppies than they claimed actually casts doubt on those facts, and the relevance of them to PADP's effectiveness. You didn't update when I first told you that fact, you instead started committing rhetorical violence against me."
Quinn: "Hmm. Let me see if I'm getting this right. So, you think I have false cached facts in my mind, such as PADP being a good charity."
Carter: "Correct."
Quinn: "And you think those cached facts tend to protect themselves from being updated."
Carter: "Correct."
Quinn: "And you think they protect themselves from updates by generating bad consequences of making the update, such as fewer people donating to PADP."
Carter: "Correct."
Quinn: "So you want to outlaw appeals to consequences, so facts have to get acknowledged, and these self-reinforcing loops go away."
Carter: "Correct."
Quinn: "That makes sense from your perspective. But, why should I think my beliefs are wrong, and that I have lots of bad self-protecting cached facts?"
Carter: "If everyone were as willing as you to lie, the history books would be full of convenient stories, the newspapers would be parts of the matrix, the schools would be teaching propaganda, and so on. You'd have no reason to trust your own arguments that speaking the truth is bad."
Quinn: "Well, I guess that makes sense. Even though I lie in the name of good values, not everyone agrees on values or beliefs, so they'll lie to promote their own values according to their own beliefs."
Carter: "Exactly. So you should expect that, as a reflection to your lying to the world, the world lies back to you. So your head is full of lies, like the 'PADP is effective and run by good people' one."
Quinn: "Even if that's true, what could I possibly do about it?"
Carter: "You could start by not making appeals to consequences. When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question."
Quinn: "But how do I prevent actually bad consequences from happening?"
Carter: "If your head is full of lies, you can't really trust ad-hoc object-level arguments against speech, like 'saying PADP didn't save very many puppies is bad because PADP is a good charity'. You can instead think about what discourse norms lead to the truth being revealed, and which lead to it being obscured. We've seen, during this conversation, that appeals to consequences tend to obscure the truth. And so, if we share the goal of reaching the truth together, we can agree not to do those."
Quinn: "That still doesn't answer my question. What about things that are actually bad, like privacy violations?"
Carter: "It does seem plausible that there should be some discourse norms that protect privacy, so that some facts aren't revealed, if such norms have good consequences overall. Perhaps some topics, such as individual people's sex lives, are considered to be banned topics (in at least some spaces), unless the person consents."
Quinn: "Isn't that an appeal to consequences, though?"
Carter: "Not really. Deciding what privacy norms are best requires thinking about consequences. But, once those norms have been decided on, it is no longer necessary to prove that privacy violations are bad during discussions. There's a simple norm to appeal to, which says some things are out of bounds for discussion. And, these exceptions can be made without allowing appeals to consequences in full generality."
Quinn: "Okay, so we still have something like appeals to consequences at the level of norms, but not at the level of individual arguments."
Carter: "Exactly."
Quinn: "Does this mean I have to say a relevant true fact, even if I think it's bad to say it?"
Carter: "No. Those situations happen frequently, and while some radical honesty practitioners try not to suppress any impulse to say something true, this practice is probably a bad idea for a lot of people. So, of course you can evaluate consequences in your head before deciding to say something."
Quinn: "So, in summary: if we're going to have suppression of some facts being said out loud, we should have that through either clear norms designed with consequences (including consequences for epistemology) in mind, or individuals deciding not to say things, but otherwise our norms should be protecting true speech, and outlawing appeals to consequences."
Carter: "Yes, that's exactly right! I'm glad we came to agreement on this."
87 comments
Comments sorted by top scores.
comment by jefftk (jkaufman) · 2019-07-19T11:18:36.964Z · LW(p) · GW(p)
The motivating example for this post is whether you should say "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies", with Quinn arguing that you shouldn't say it because saying it has bad consequences. The problem is, saying this has very clearly good consequences, which means trying to use it as a tool for figuring out what you think of appeals to consequences sets up your intuitions to confuse you.
(It has clearly good consequences because "how much money goes to PADP right now" is far less important than "building a culture of caring about the actual effectiveness of organizations and truly trying to find/make the best ones". Plus if, say, Animal Charity Evaluators trusted this higher number of puppies saved and it had led them to recommend PADP as one of their top charities, that would mean displacing funds that could have gone to more effective animal charities. The whole Effective Altruism project is about trying to figure out how to get the biggest positive impact, and you can't do this if you declare discussing negative information about organizations off limits.)
The post would be a lot clearer if it had a motivating example that really did have bad consequences, all things considered. As a person who's strongly pro-transparency, it's hard for me to come up with cases, but there are still contexts where I think it's probably the case. What if Carter were a researcher who had run a small study on a new infant vaccine and seen elevated autism rates in the experimental group? There's an existing "vaccines cause autism" meme that is both very probably wrong and very probably harmful, which means Carter should be careful about messaging for their results. Good potential outcomes include:
- Carter's experiment is replicated, confirmed, and the vaccine is not rolled out.
- Carter's experiment fails to replicate, researchers look into it more, and discover that there was a problem in the initial experiment / in the replication / they need more data / etc.
Bad potential outcomes include:
- Headlines that say "scientists finally admit vaccines do cause autism"
Because of the potential harmful consequences of handling this poorly, Carter should be careful about how they talk about their results and to whom. Trying to get funding to scale up the experiment, making sure the FDA is aware, letting other researchers know, etc., are all beneficial and have good consequences. Going to the mainstream media with a controversial sell-lots-of-papers story, by contrast, would have predictably bad consequences.
When talking with friends or within your field it's hard to think of cases where you shouldn't just say the interesting thing you've found, while with larger audiences and in less truth-oriented cultures you need to start being more careful.
EDIT: expanded this into https://www.jefftk.com/p/appeals-to-consequences
Replies from: Kaj_Sotala, jessica.liu.taylor
↑ comment by Kaj_Sotala · 2019-07-19T11:54:09.447Z · LW(p) · GW(p)
The post would be a lot clearer if it had a motivating example that really did have bad consequences, all things considered.
The extreme case would be a scientific discovery which enabled anyone to destroy the world, such as the supernova thing in Three Worlds Collide [LW · GW] or the thought experiment that Bostrom discusses in The Vulnerable World Hypothesis:
So let us consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? Szilard becomes gravely concerned. He sees that his discovery must be kept secret at all costs. But how? His insight is bound to occur to others. He could talk to a few of his physicist friends, the ones most likely to stumble upon the idea, and try to persuade them not to publish anything on nuclear chain reactions or on any of the reasoning steps leading up to the dangerous discovery. (That is what Szilard did in actual history.)
[...] Soon, figuring out how to initiate a nuclear chain reaction with pieces of metal, glass, and electricity will no longer take genius but will be within reach of any STEM student with an inventive mindset.
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-19T16:17:28.963Z · LW(p) · GW(p)
Note, I'm not arguing for a positive obligation to always inform everyone (see last few lines of dialogue), it's important for people to use their discernment sometimes.
But, in the case you mentioned, if your study really did find that a vaccine caused autism, by the logic of the dialogue, that casts doubt on the "vaccines don't cause autism and antivaxxers are wrong and harmful" belief. (Maybe you're not the only one who has found that vaccines cause autism, and other researchers are hiding it too). So, you should at least update that belief on the new evidence before evaluating consequences. (It could be that, even after considering this, the new study is likely to be a fluke, and discerning researchers will share the new study in an academic community without going to the press)
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2019-07-19T16:55:50.955Z · LW(p) · GW(p)
My main objection is that the post is built around a case where Quinn is very wrong in their initial "bad consequences" claim, and that this leads people to have misleading intuitions. I was trying to propose an alternative situation where the "bad consequences" claim was true or closer to true, but where Quinn would still be wrong to suggest Carter shouldn't describe what they'd found.
(Also, for what it's worth, I find the Quinn character's argumentative approach very frustrating to read. This makes it hard to take anything that character describes seriously.)
comment by Scott Alexander (Yvain) · 2019-07-19T20:28:53.423Z · LW(p) · GW(p)
Instead of Quinn admitting lying is sometimes good, I wish he had said something like:
“PADP is widely considered a good charity by smart people who we trust. So we have a prior on it being good. You’ve discovered some apparent evidence that it’s bad. So now we have to combine the prior and the evidence, and we end up with some percent confidence that they’re bad.
If this is 90% confidence they’re bad, go ahead. What if it’s more like 55%? What’s the right action to take if you’re 55% sure a charity is incompetent and dishonest (but 45% chance you misinterpreted the evidence)? Should you call them out on it? That’s good in the world where you’re right, but might disproportionately tarnish their reputation in the world where you’re wrong. It seems like if you’re 55% sure, you have a tough call. You might want to try something like bringing up your concerns privately with close friends and only going public if they share your opinion, or asking the charity first and only going public if they can’t explain themselves. Or you might want to try bringing up your concerns in a nonconfrontational way, more like ‘Can anyone figure out what’s going on with PADP’s math?’ rather than ‘PADP is dishonest’. If this doesn’t work and lots of other people confirm your intuitions of distrust, then your confidence reaches 90% and you start doing things more like shouting ‘PADP is dishonest’ from the rooftops.
Or maybe you’ll never reach 90% confidence. Many people think that climate science is dishonest. I don’t doubt many of them are reporting their beliefs honestly - that they’ve done a deep investigation and that’s what they’ve concluded. It’s just that they’re not smart, informed, or rational enough to understand what’s going on, or to process it in an unbiased way. What advice would you give these people about calling scientists out on dishonesty - again given that rumors are powerful things and can ruin important work? My advice to them would be to consider that they may be overconfident, and that there needs to be some intermediate ‘consider my own limitations and the consequences of my irreversible actions’ step in between ‘this looks dishonest to me’ and ‘I will publicly declare it dishonest’. And that step is going to look like an appeal to consequences, especially if the climate deniers are so caught up in their own biases that they can't imagine they might be wrong.
I don’t want to deny that calling out apparent dishonesty when you’re pretty sure of it, or when you’ve gone through every effort you can to check it and it still seems bad, will sometimes (maybe usually) be the best course, but I don’t think it’s as simple as you think.”
...and seen what Carter answered.
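[a minimal expected-value sketch of the 55%-versus-90% tension described above, with entirely made-up payoff numbers; it illustrates the shape of the reasoning, not anything stated in the comment]

```python
# Should you publicly call the charity dishonest, given your confidence that it really is bad?
# The payoff numbers below are arbitrary illustrative assumptions.

def ev_public_callout(p_bad, benefit_if_right=100, harm_if_wrong=150):
    """Expected value of a public call-out: right with probability p_bad, wrong otherwise."""
    return p_bad * benefit_if_right - (1 - p_bad) * harm_if_wrong

for p_bad in (0.55, 0.90):
    print(f"P(charity is bad) = {p_bad:.2f}: EV of public call-out = {ev_public_callout(p_bad):+.1f}")

# P = 0.55: 0.55*100 - 0.45*150 = -12.5  -> quieter intermediate steps look better
# P = 0.90: 0.90*100 - 0.10*150 = +75.0  -> go ahead
```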
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-07-19T22:53:41.037Z · LW(p) · GW(p)
Part of this is pretty close to what I wrote [LW(p) · GW(p)] in the actual debate. The part about climate science is new though and I'd like to see a response to it.
Replies from: dxu
↑ comment by dxu · 2019-07-25T17:44:06.908Z · LW(p) · GW(p)
The part about climate science seems like a pretty bog-standard outside view argument, which in turn means I find it largely uncompelling. Yes, there are people who are so stupid, they can only be saved from their own stupidity by executing an epistemic maneuver that works regardless of the intelligence of the person executing it. This does not thereby imply that everyone should execute the same maneuver, including people who are not that stupid, and therefore not in need of saving. If someone out there is so incompetent that they mistakenly perceive themselves as competent, then they are already lost, and the fact that an illegal (from the perspective of normative probability theory) epistemic maneuver exists which would save them if they executed it, does not thereby make that maneuver a normatively good move. (And even if it were, it's not as though the people who would actually benefit from said maneuver are going to execute it--the whole reason that such people are loudly, confidently mistaken is that they don't take the outside view seriously.)
In short: there is simply no principled justification for modesty-based arguments [LW · GW], and--though it may be somewhat impolite to say--I agree with Eliezer that people who find such arguments compelling are actually being influenced by social modesty norms (whether consciously or unconsciously), rather than any kind of normative judgment. Based on various posts that Scott has written in the past, I would venture to say that he may be one of those people.
comment by romeostevensit · 2019-07-18T07:10:19.140Z · LW(p) · GW(p)
The bar that is set for appeals to consequences implies the sort of equilibrium world you'll end up in. Erring on the side of a higher bar is better, because it is hard to go the other way: epistemic standards tend to slide in the face of local incentives.
I also want to note an argumentative tactic that occurs on the tacit level whereby people will push you into a state where you need to expend more energy on average per truth bit than they do, so they eventually win by attrition. Related to evaporative cooling. The subjective experience of this feels like talking to the cops. You sense that no big wins are available (because they have their bottom line) but big losses are, so you stop talking. If you've encountered this dynamic, you recognize things like this
> "You still haven't refuted my argument. If you don't do so, I win by default."
as part of the supporting framework for the dynamic and it will make you very angry...which others will then use as part of the dynamic which makes you angry which......
comment by Eli Tyre (elityre) · 2019-11-25T20:25:20.182Z · LW(p) · GW(p)
When someone is arguing that a belief of yours is wrong, listen to the argument at the object level, instead of jumping to the question of whether saying the relevant arguments out loud is a good idea, which is a much harder question.
It seems to me that the key issue here is the need for both public and private conversational spaces.
In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we're all fighting / negotiating over. In those contexts it is reasonable (I don't know if it is correct, or not), to constrain what things you say, even if they're true, because of their consequences. It is often the case that one piece of information, though true, taken out of context, does more harm than good, and often conveying the whole informational context to a large group of people is all but impossible.
But we need to be able to figure out which policies to support, somehow, separately from supporting them on this political battlefield. We also need private spaces, where we can think and our initial thoughts can be isolated from their possible consequences, or we won't be able to think freely.
It seems like Carter thinks they are having a private conversation, in a private space, and Quinn thinks they're having a public conversation in a public space.
Replies from: Zack_M_Davis, jessica.liu.taylor, elityre
↑ comment by Zack_M_Davis · 2019-11-25T21:15:21.535Z · LW(p) · GW(p)
(Strong-upvoted for making something explicit that is more often tacitly assumed. Seriously, this is an incredibly useful comment; thanks!!)
In public spaces, arguments are soldiers. They have to be, because others treat them that way, and because there are actual policies that we're all fighting / negotiating over
Can you unpack [LW · GW] what you mean by "have to be" in more detail? What happens if you just report your actual reasoning [LW · GW] (even if your voice trembles [LW · GW])? (I mean that as a literal what-if question, not a rhetorical one. If you want, I can talk about how I would answer this in a future comment.)
I can imagine creatures living in a hyper-Malthusian Nash equilibrium [LW(p) · GW(p)] where the slightest deviation from the optimal negotiating stance dictated by the incentives [LW · GW] just gets you instantly killed and replaced with someone else who will follow the incentives. In this world, if being honest isn't the optimal negotiating stance, then honesty is just suicide. Do you think this is a realistic description of life for present-day humans? Why or why not? (This is kind of a leading question on my part. Sorry.)
But we need to be able to figure out which policies to support, somehow, separately from supporting them on this political battlefield.
The problem with this is that private deliberation is extremely dependent on public information; misinformation has potentially drastic ripple effects [LW · GW]. You might think you can sit in your room with an encyclopedia, figure out the optimal cause area, and compute the optimal propaganda for that cause ... but if the encyclopedia authors are following the same strategy, then your encyclopedia is already full of propaganda.
Replies from: elityre, elityre
↑ comment by Eli Tyre (elityre) · 2019-11-27T18:52:14.739Z · LW(p) · GW(p)
Seriously, this is an incredibly useful comment; thanks!
Huh. Can you say why?
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-11-30T18:18:58.783Z · LW(p) · GW(p)
You're clearly and explicitly advocating for a policy I think is abhorrent. This is really valuable, because it gives me a chance to argue that the policy is abhorrent, and potentially change your mind (or those of others in the audience who agree with the policy).
I want to make sure you get socially-rewarded for clearly and explicitly advocating for the abhorrent policy (thus the strong-upvote, "thanks!!", &c.), because if you were to get punished instead, you might think, "Whoops, better not say that in public so clearly", and then secretly keep on using the abhorrent policy.
Obviously—and this really should just go without saying—just because I think you're advocating something abhorrent doesn't mean I think you're abhorrent. People make mistakes! Making mistakes is OK as long as there exists enough optimization pressure to eventually correct mistakes. If we're honest with each other about our reasoning, then we can help correct each other's mistakes! If we're honest with each other about our reasoning in public, then even people who aren't already our closest trusted friends can help us correct our mistakes!
↑ comment by Eli Tyre (elityre) · 2019-11-27T18:40:14.226Z · LW(p) · GW(p)
Can you unpack [LW · GW] what you mean by "have to be" in more detail? What happens if you just report your actual reasoning [LW · GW] (even if your voice trembles) [LW · GW]?
Well, I think the main thing is that this depends on onlookers having the ability, attention, and motivation to follow the actual complexity of your reasoning, which is often a quite unreasonable assumption.
Usually, onlookers are going to round off what you're saying to something simpler. Sometimes your audience has the resources to actually get on the same page with you, but that is not the default. If you're not taking that dynamic into account, then you're just shooting yourself in the foot.
Many of the things that I believe are nuanced, and nuance doesn't travel well in the public sphere, where people will overhear one sentence out of context (for instance), and then tell their friends what "I believe." So tact requires that I don't say those things, in most contexts.
To be clear, I make a point to be honest, and I am not suggesting that you should ever outright lie.
You might think you can sit in your room with an encyclopedia, figure out the optimal cause area, and compute the optimal propaganda for that cause ... but if the encyclopedia authors are following the same strategy, then your encyclopedia is already full of propaganda.
This does not seem right to me, so it seems like one of us is missing the other somehow.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-11-30T18:20:59.904Z · LW(p) · GW(p)
Okay, I was getting too metaphorical with the encyclopedia; sorry about that. The proposition I actually want to defend is, "Private deliberation is extremely dependent on public information." This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you've heard in public discourse, rather than things you've directly seen and verified for yourself. But if everyone in Society is, like you, simplifying their public arguments in order to minimize their social "attack surface", then the information you bring to your private discussion is based on fear-based simplifications, rather than the best reasoning humanity has to offer.
In the grandparent comment, the text "report your actual reasoning" is a link to the Sequences post "A Rational Argument" [LW · GW], which you've probably read. I recommend re-reading it.
If you omit evidence against your preferred conclusion, people can't take your reasoning at face value anymore: if you first write at the bottom of a piece of paper [LW · GW], "... and therefore, Policy P is the best," it doesn't matter what you write on the lines above.
A similarly catastrophic, but not identical, distortion occurs when you omit evidence that "someone might take the wrong way." If your actual bottom line is, "And therefore I'm a Good Person who definitely doesn't believe anything that could look Bad if taken out of context," well, that might be a safe life decision for you, but then it's not clear why I should pay attention to anything else you say.
If you're not taking that dynamic into account, then you're just shooting yourself in the foot. [...] people will overhear one sentence out of context (for instance), and then tell their friends what "I believe."
Alternative metaphor: the people punishing you for misinterpretations of what you actually said are the ones shooting you in the foot. Those bastards! Maybe if we strategize about it together, there's some way to defy them, rather than accepting their tyrannical rule as inevitable?
To be clear, I make a point to be honest, and I am not suggesting that you should ever outright lie.
It depends on what "honest" means in this context. If "honest" just means "not telling conscious explicit unambiguous outright lies" then, sure, whatever. I think intellectual honesty is a much higher standard than that.
Replies from: Raemon, steven0461
↑ comment by Raemon · 2019-12-03T23:49:06.868Z · LW(p) · GW(p)
(I'm not sure this comment is precisely a reply to the previous one, or more of a general reply to "things Zack has been saying for the past 6 months")
I notice that, by this point, I basically agree with some kind of "something about the Overton window of norms should change in the direction Zack is pushing", but it seems... like you're pushing more for an abstract principle than a concrete change, and I'm not sure how to evaluate it. I'd find it helpful if you got more specific about what you're pushing for.
I'd summarize my high-level understanding of the push you're making as:
1. "Geez, the appropriate mood for 'hmm, communicating openly and honestly in public seems hard' is not 'whelp, I guess we can't do that then'. Especially if we're going to call ourselves rationalists"
2. Any time that mood seems to be cropping up or underlying someone's decision procedure, it should be pushed back against.
[is that a fair high level summary?]
I think I have basically come to agree (or at least take quite seriously), point #1 (this is a change from 6 months ago). There are some fine details about where I still disagree with something about your approach, and what exactly my previous and new positions are/were. But I think those are (for now) more distracting than helpful.
My question is, what precise things do you want changed from the status quo? (I think it's important to point at missing moods, but implementing a missing mood requires actually operationalizing it into actions of some sort.) I think I'd have an easier time interacting with this if I understood better what exact actions/policies you're pushing for.
I see roughly two levels of things one might operationalize:
- Individual Action – Things that individuals should be trying to do (and, if you're a participant on LessWrong or similar spaces, the "price for entry" should be something like "you agree that you are supposed to be trying to do this thing")
- Norm Enforcement – Things that people should be commenting on, or otherwise acting upon, when they see other people doing
(you might split #2 into "things everyone should do" vs "things site moderators should do", or you might treat those as mostly synonymous)
Some examples of things you might mean by Individual Action are things like:
- "You[everyone] should be attempting to gain thicker skin" (or, different take: "you should try to cultivate an attitude wherein people criticizing your post doesn't feel like an attack")
- "You should notice when you have avoided speaking up about something because it was inconvenient." (Additional/alternate variants include: "when you notice that, speak up anyway", or "when you notice that, speak up, if the current rate at which you mention the inconvenient things is proportionately lower than the rate at which you mention convenient things")
Some examples of norm enforcement might be:
- "When you observe saying something false, or sliding goalposts around in a way that seems dishonest, say so" (with sub-options for how to go about saying so, maybe you say they are lying, or motivated, or maybe you just focus on the falseness).
- "When you observe someone systematically saying true-things that seem biased, say so"
Some major concerns/uncertainties of mine are:
1. How do you make sure that you don't accidentally create a new norm which is "don't speak up at all" (because it's much easier to notice and respond to things that are happening, vs things that are not happening)
2. Which proposed changes are local strict improvements, that you can just start doing and have purely good effects, and which require multiple changes happening at once in order to have good effects. Or, which changes require some number of people to just be willing to eat some social cost until a new equilibrium is reached. (This might be fine, but I think it's easier to respond concretely to a proposal with a clearer sense of what that social cost is. If people aren't willing to pay the cost, you might need a kickstarter for Inadequate Equilibria [LW · GW])
Both concerns seem quite addressable, just, require some operationalization to address.
For me to implement changes in myself (either as a person aspiring to be a competent truthseeking community member, or as a person helping to maintain a competent truthseeking culture), the changes ideally need to be specified in some kind of Trigger-Action form. (This may not be universally true; some people get more mileage out of internal-alignment shifts rather than habit changes, but I personally find the latter much more helpful.)
Replies from: Zack_M_Davis, mr-hire
↑ comment by Zack_M_Davis · 2019-12-04T04:38:37.607Z · LW(p) · GW(p)
you're pushing more for an abstract principle than a concrete change
I mean, the abstract principle that matters is of the kind that can be proved as a theorem rather than merely "pushed for." If a lawful physical process results in the states of physical system A becoming correlated with [LW · GW] the states of system B, and likewise system B and system C, then observations of the state of system C are evidence about the state of system A. I'm claiming this as technical knowledge, not a handwaved philosophical intuition; I can write literal computer programs [LW · GW] that exhibit this kind of evidential-entanglement relationship.
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn't work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
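[a minimal sketch of the kind of program being gestured at here, with made-up noise rates: three binary systems where B noisily reports A and C noisily reports B, so observing C informs you about A only while B's report actually tracks A]

```python
import random

def estimate_p_a_given_c(b_lies, trials=200_000, noise=0.1):
    """Monte Carlo estimate of P(A is true | C observed true) in an A -> B -> C chain."""
    a_when_c_true = []
    for _ in range(trials):
        a = random.random() < 0.5                        # hidden state of system A
        if b_lies:
            b = True                                     # B reports the flattering value regardless of A
        else:
            b = a if random.random() > noise else not a  # B noisily tracks A
        c = b if random.random() > noise else not b      # C noisily tracks B
        if c:
            a_when_c_true.append(a)
    return sum(a_when_c_true) / len(a_when_c_true)

print("honest B:", round(estimate_p_a_given_c(b_lies=False), 3))  # ~0.82: C is evidence about A
print("lying B: ", round(estimate_p_a_given_c(b_lies=True), 3))   # ~0.50: C tells you nothing about A
```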
Any time that mood seems to be cropping up or underlying someone's decision procedure, it should be pushed back against.
The word "should" definitely doesn't belong here. Like, that's definitely a fair description of the push I'm making. Because I actually feel that way. But obviously, other people shouldn't passionately advocate for open and honest discourse if they're not actually passionate about open and honest discourse: that would be dishonest!
I think I'd have an easier time interacting with this if I understood better what exact actions/policies you're pushing for.
I mean, you don't have to interact with it if you don't feel like it! I'm not the boss of anyone!
Replies from: Raemon, mr-hire
↑ comment by Raemon · 2019-12-04T21:08:43.611Z · LW(p) · GW(p)
But obviously, other people shouldn't passionately advocate for open and honest discourse if they're not actually passionate about open and honest discourse: that would be dishonest!
The unpacked "should" I imagined you implying was more like "If you do not feel it is important to have open/honest discourse, you are probably making a mistake. i.e. it's likely that you're not noticing the damage you're doing and if you really reflected on it honestly you'd probably "
Notably, the process whereby you can use your observations about C to help make better predictions about A doesn't work if system B is lying to make itself look good. I again claim this as technical knowledge, and not a political position.
That part is technical knowledge (and so is the related "the observation process doesn't work [well] if system B is systematically distorting things in some way, whether intentional or not."). And I definitely agree with that part, and expect Eli does too, and generally don't think it's where the disagreement lives.
But, you seem to have strongly implied, if not outright stated, that this isn't just an interesting technical fact that exists in isolation; it implies an optimal (or at least improved) policy that individuals and groups can adopt to improve their truthseeking capability. This implies we (at least, rationalists with roughly similar background assumptions as you) should be doing something differently than we currently are doing. And, like, it actually matters what that thing is.
There is some fact of the matter about what sorts of interacting systems can make the best predictions and models.
There is a (I suspect different) fact of the matter of what the optimal systems you can implement on humans look like, and yet another quite different fact of the matter of what improvements are possible on LessWrong-in-particular given our starting conditions, and what is the best way to coordinate on them. They certainly don't seem like they're going to come about by accident.
There is a fact of the matter of what happens if you push for "thick skin" and saying what you mean without regard for politeness – maybe it results in a community that converges on truth faster (by some combination of distorting less when you speak, or by spending less effort on communication or listening). Or maybe it results in a community that converges on truth slower because it selected more for people who are conflict-prone than people who are smart. I don't actually know the answer here, and the answer seems quite important.
Early LessWrong had a flaw (IMO) regarding instrumental rationality – there is also a fact of the matter of what an optimal AI decisionmaker would do if they were running on a human-brain worth of compute. But, this is quite different from what kind of decisionmaking works best implemented on typical human wetware, and failure to understand this resulted in a lot of people making bad plans and getting depressed because the plans they made were actually impossible to run.
I mean, you don't have to interact with it if you don't feel like it! I'm not the boss of anyone!
Sure, but, like, I want to interact with it (both individually and as a site moderator) because I think it's pointing in an important direction. You've noted this as something I should probably pay special attention to. [LW(p) · GW(p)] And, like, I think you're right, so I'm trying to pay special attention to it.
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-04T14:21:31.732Z · LW(p) · GW(p)
The word "should" definitely doesn't belong here. Like, that's definitely a fair description of the push I'm making. Because I actually feel that way. But obviously, other people shouldn't passionately advocate for open and honest discourse if they're not actually passionate about open and honest discourse: that would be dishonest!
This seems to me like you're saying "people shouldn't have to advocate for being open and honest because people should be open and honest"
And then the question becomes... If you think it's true that people should be open and honest, do you have policy proposals that help that become true?
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-12-04T15:15:32.447Z · LW(p) · GW(p)
Not really? The concept of a "policy proposal" seems to presuppose control over some powerful central decision node, which I don't think is true of me. This is a forum website. I write things. Maybe someone reads them. Maybe they learn something. Maybe me and the people who are better at open and honest discourse preferentially collaborate with each other (and ignore people who we can detect are playing a different game), have systematically better ideas, and newcomers tend to imitate our ways in a process of cultural evolution.
Replies from: Raemon, mr-hire
↑ comment by Raemon · 2019-12-04T21:01:13.939Z · LW(p) · GW(p)
I separated out the question of "stuff individuals should do unilaterally" from "norm enforcement" because it seems like at least some stuff doesn't require any central decision nodes.
In particular, while "don't lie" is an easy injunction to follow, "account for systematic distortions in what you say" is actually quite computationally hard, because there are a lot of distortions with different mechanisms and different places one might intervene on their thought process and/or communication process. "Publicly say literally every inconvenient thing you think of" probably isn't what you meant (or maybe it was?), and it might cause you to end up having a harder time thinking inconvenient thoughts [EA · GW].
I'm asking because I'm actually interested in improving on this dimension.
Replies from: Raemon
↑ comment by Raemon · 2019-12-04T21:07:33.060Z · LW(p) · GW(p)
(some current best guesses of mine, at least for my own values, are:
- "Practice noticing heretical thoughts I think and actually notice what things you can't say, without obligating yourself to say them, so that you don't accidentally train yourself not to think them"
- "Practice noticing opportunities to exhibit social courage, either in low stakes situations, or important situations. Allocate some additional attention towards practicing social courage as skill/muscle" (it's unclear to me how much to prioritize this, because there's two separate potential models of 'social/epistemic courage is a muscle' and 'social/epistemic courage is a resource you can spend, but you risk using up people's willingness to listen to you, as well a "most things one might be courageous about actually aren't important and you'll end up spending a lot of effort on things that don't matter")
But, I am interested in what you actually do within your own frame/value setup.
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-04T17:13:20.510Z · LW(p) · GW(p)
The concept of a "policy proposal" seems to presuppose control over some powerful central decision node,
I'm more interested, as the person who has been the powerful central decision node at multiple times in my life, and will likely be in the future (and as someone who is interested in institution design in general), in whether you have suggestions for how to make this work in new or existing institutions. For instance, some of the ideas I've shared elsewhere on radical transparency norms seem like one way to go about this.
I think cultural evolution and the marketplace of ideas seems like a good idea, but memetics unfortunately select for other things than just truth, and relying on memetics to propagate truth norms (if indeed the propagation of truth norms is good) feels insufficient.
↑ comment by Matt Goldenberg (mr-hire) · 2019-12-04T03:42:05.297Z · LW(p) · GW(p)
I would love to see a summary of what particular arguments of Zack's changed your mind, and how it changed over time.
↑ comment by steven0461 · 2019-12-03T18:52:52.862Z · LW(p) · GW(p)
The proposition I actually want to defend is, "Private deliberation is extremely dependent on public information." This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you've heard in public discourse, rather than things you've directly seen and verified for yourself.
Most of the harm here comes not from public discourse being filtered in itself, but from people updating on filtered public discourse as if it were unfiltered. This makes me think it's better to get people to realize that public discourse isn't going to contain all the arguments than to get them to include all the arguments in public discourse.
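[a minimal sketch of this point with made-up numbers: a listener who updates on filtered reports as if they were unfiltered ends up badly miscalibrated, while one who knows about the filter can roughly back it out]

```python
import random
from math import comb

def posterior_p_high(heads, flips, p_low=0.3, p_high=0.7):
    """P(bias = p_high | heads out of flips), flat prior over {p_low, p_high}."""
    like_high = comb(flips, heads) * p_high**heads * (1 - p_high) ** (flips - heads)
    like_low = comb(flips, heads) * p_low**heads * (1 - p_low) ** (flips - heads)
    return like_high / (like_high + like_low)

random.seed(0)
true_bias, n_flips = 0.3, 100
heads = sum(random.random() < true_bias for _ in range(n_flips))

# The "public discourse" only passes along the heads; the tails go unmentioned.
naive = posterior_p_high(heads, flips=heads)    # treats the reports as if they were all the flips
aware = posterior_p_high(heads, flips=n_flips)  # knows 100 flips happened and only heads were reported

print(f"{heads} heads reported out of {n_flips} actual flips")
print("naive listener's P(bias is high):", round(naive, 3))  # ~1.0: confidently wrong
print("aware listener's P(bias is high):", round(aware, 3))  # ~0.0: roughly correct
```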
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-12-03T22:20:45.083Z · LW(p) · GW(p)
I agree that that's much less bad [LW · GW]—but "better"? "Better"!? By what standard? What assumptions are you invoking without stating them?
I should clarify: I'm not saying submitting to censorship is never the right thing to do. If we live in Amazontopia, and there's a man with a gun on the streetcorner who shoots anyone who says anything bad about Jeff Bezos, then indeed, I would not say anything bad about Jeff Bezos—in this specific (silly) hypothetical scenario with that specific threat model.
But ordinarily, when we try to figure out which cognitive algorithms [LW · GW] are "better" (efficiently produce accurate maps, or successful plans), we tend to assume a "fair" problem class unless otherwise specified. The theory of "rational thought, except you get punished if you think about elephants" is strictly more complicated than the theory of "rational thought." Even if we lived in a world where robots with MRI machines who punish elephant-thoughts were not unheard of and needed to be planned for, it would be pedagogically weird to treat that as the central case.
I hold "discourse algorithms" to the same standard: we need to figure out how to think together in the simple, unconstrained case before we have any hope of successfully dealing with the more complicated problem of thinking together under some specific censorship threat.
I am not able to rightly apprehend what kind of brain damage has turned almost everyone I used to trust [LW · GW] into worthless cowards who just assume as if it were a law of nature that discourse is impossible—that rank and popularity are more powerful than intelligence. Is the man on the streetcorner actually holding a gun, or does he just flash his badge and glare at people? Have you even looked?
Most of the harm
Depends on the problem you're facing. If you just want accurate individual maps, sufficiently smart Bayesians can algorithmically "back out" the effects of censorship. But what if you actually need common knowledge [LW · GW] for something?
Replies from: steven0461
↑ comment by steven0461 · 2019-12-04T21:30:52.997Z · LW(p) · GW(p)
we need to figure out how to think together
This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we're limited by temperament rather than understanding. I agree that if we're trying to think about how to think together we can treat no censorship as the default case.
worthless cowards
If cowardice means fear of personal consequences, this doesn't ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don't do it is because I'd feel guilt about harming the discourse. This motivation doesn't disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.
who just assume as if it were a law of nature that discourse is impossible
I don't know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far real discourse would diverge from fake discourse because you assume real discourse is possible and interpret too much existing discourse as real discourse.
But what if you actually need common knowledge [LW · GW] for something?
Then that's a reason to try to create common knowledge, whether privately or publicly. I think ordinary knowledge is fine most of the time, though.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-12-04T22:49:26.061Z · LW(p) · GW(p)
The reason why I mostly don't do it is because I'd feel guilt about harming the discourse
Woah, can you explain this part in more detail?! Harming the discourse how, specifically? If you have thoughts, and your thoughts are correct, how does explaining your correct thoughts make things worse?
Replies from: steven0461, Raemon
↑ comment by steven0461 · 2019-12-05T00:01:45.377Z · LW(p) · GW(p)
Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2019-12-05T01:26:07.265Z · LW(p) · GW(p)
I want to distinguish between "harming the discourse" and "harming my faction in a marketing war."
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance), then other people who aren't already your closest trusted friends have the opportunity to learn from the arguments and evidence that actually convinced you, combine it with their own knowledge, and potentially make better decisions. ("Discourse" might not be the right word here—the concept I want to point to includes unilateral truthtelling, as on a blog with no comment section, or where your immediate interlocutor doesn't "reciprocate" in good faith, but someone in the audience might learn something.)
If you think other people can't process arguments at all, but that you can, how do you account for your own existence? For myself: I'm smart, but I'm not that smart (IQ ~130). The Sequences were life-changingly great, but I was still interested in philosophy and argument before that. Our little robot cult does not have a monopoly on reasoning itself.
a lot of people will respond by updating against the prospect of advanced AI
Sure. Those are the people who don't matter. Even if you could psychologically manipulate [revised [LW(p) · GW(p)]: persuade] them into having the correct bottom-line [LW · GW] "opinion" [? · GW], what would you do with them? Were you planning to solve the alignment problem by lobbying Congress to pass appropriate legislation?
↑ comment by Vaniver · 2019-12-05T02:15:08.099Z · LW(p) · GW(p)
When I say that public discourse is really important, what I mean is that if you tell the truth in public about what you believe and why (possibly investing a lot of effort and using a lot of hyperlinks to bridge the inferential distance)
I want to agree with the general point here, but I find it breaking down in some of the cases I'm considering. I think the underlying generator is something like "communication is a two-way street", and it makes sense to not just emit sentences that compile and evaluate to 'true' in my ontology, but that I expect to compile and evaluate to approximately what I wanted to convey in their ontology.
Does that fall into 'harming my faction in a marketing war' according to you?
↑ comment by Zack_M_Davis · 2019-12-05T17:08:15.158Z · LW(p) · GW(p)
No, I agree that authors should write in language that their audience will understand. I'm trying to make a distinction between having intent to inform (giving the audience information that they can use to think with) vs. persuasion (trying to exert control over the audience's conclusion). Consider this generalization of a comment upthread—
Consider the idea that X implies Y. I think this is a perfectly correct point, but I'm also willing to never make it, because a lot of people will respond by concluding that not-X, because they're emotionally attached to not-Y, and I care a lot more about people having correct beliefs about the truth value of X than Y.
This makes perfect sense as part of a consequentialist algorithm for maximizing the number of people who believe X. The algorithm works just as well, and for the same reasons whether X = "superintelligence is an existential risk" and Y = "returns from stopping global warming are smaller than you might otherwise think" (when many audience members have global warming "cause-area loyalty"), or whether X = "you should drink Coke" and Y = "returns from drinking Pepsi are smaller than you might otherwise think" (when many audience members have Pepsi brand loyalty). That's why I want to call it a marketing algorithm—the function is to strategically route around the audience's psychological defenses, rather than just tell them stuff as an epistemic peer.
To be clear, if you don't think you're talking to an epistemic peer, strategically routing around the audience's psychological defenses might be the right thing to do! For an example that I thought was OK because I didn't think it significantly distorted the discourse, see my recent comment explaining an editorial choice I made in a linkpost description [LW(p) · GW(p)]. But I think that when one does this, it's important to notice the nature of what one is doing (there's a reason my linked comment uses the phrase "marketing keyword"!), and track how much of a distortion it is relative to how you would talk to an epistemic peer. As you know, quality of discourse is about the conversation executing an algorithm that reaches truth, not just convincing people of the conclusion that (you think) is correct. That's why I'm alarmed at the prospect of someone feeling guilty (!?) that honestly reporting their actual reasoning might be "harming the discourse" (!?!?).
↑ comment by Vaniver · 2019-12-07T18:23:30.663Z · LW(p) · GW(p)
"Intent to inform" jives with my sense of it much more than "tell the truth."
On reflection, I think the 'epistemic peer' thing is close but not entirely right. Definitely if I think Bob "can't handle the truth" about climate change, and so I only talk about AI with Bob, then I'm deciding that Bob isn't an epistemic peer. But if I have only a short conversation with Bob, then there's a Gricean implication point that saying X implicitly means I thought it was more relevant to say than Y, or is complete, or so on, and so there are whole topics that might be undiscussed because I don't want to send the implicit message that my short thoughts on the matter are complete enough to reconstruct my position or that this topic is more relevant than other topics.
---
More broadly, I note that I often see "the discourse" used as a term of derision, I think because it is (currently) something more like a marketing war than an open exchange of information. Or, like a market left to its own devices, it has Goodharted on marketing. It is unclear to me whether it's better to abandon it (like, for example, not caring about what people think on Twitter) or attempt to recapture it (by pushing for the sorts of 'public goods' and savvy customers that cause markets to Goodhart less on marketing).
↑ comment by Eli Tyre (elityre) · 2024-08-27T19:15:08.098Z · LW(p) · GW(p)
To be clear, if you don't think you're talking to an epistemic peer, strategically routing around the audience's psychological defenses might be the right thing to do!
I'm confused reading this.
It seems to me that you think routing around psychological defenses is sometimes a reasonable thing to do with people who aren't your epistemic peers.
But you said above that you thought the overall position of having private discourse spaces and public discourse spaces was abhorrent?
How do these fit together? The vast majority of people are not your (or my) epistemic peers; even the robot cult doesn't have a monopoly on truth or truth-seeking. And so you would behave differently in private spaces with your peers and in public spaces that include the whole world.
Can you clarify?
↑ comment by Zack_M_Davis · 2024-08-28T15:11:46.051Z · LW(p) · GW(p)
It's a fuzzy Sorites-like distinction, but I think I'm more sympathetic to trying to route around a particular interlocutor's biases in the context of a direct conversation with a particular person (like a comment or Tweet thread) than I am in writing directed "at the world" (like top-level posts), because the more something is directed "at the world", the more you should expect that many of your readers know things that you don't, such that the humility argument for honesty applies forcefully.
↑ comment by Eli Tyre (elityre) · 2024-08-28T19:48:30.333Z · LW(p) · GW(p)
FWIW, I have the opposite inclination. If I'm talking with a person one-on-one, we have high bandwidth. I will try to be skillful and compassionate in avoiding triggering them, while still saying what's true, and depending on who I'm talking to, I may elect to remain silent about some of the things that I think are true.
But overall I am much more uncomfortable with anything less than straightforward statements of what I believe and why in contexts with fewer people, where there is the communication capacity to clarify misunderstandings, and where my declining to offer an objection to something that someone says more strongly implies agreement.
the more you should expect that many of your readers know things that you don't
This seems right to me.
But it also seems right to me that the broader your audience, the lower their average level of epistemics and commitment to epistemic discourse norms, and the lower your communication bandwidth.
Which means there is proportionally more risk of 1) people mishearing you, and that damaging the prospects of the policies you want to advocate for (e.g. "marketing"), 2) people mishearing you, and that causing you personal problems of various stripes, and 3) people understanding you correctly, and that causing you personal problems of various stripes.[1]
So the larger my audience the more reticent I might be about what I'm willing to say.
[1] There's obviously a fourth quadrant of that 2-by-2: people hearing you correctly, and that damaging the prospects of the policies you want to advocate for.
Acting to avoid that seems commons-destroying, and personally out of integrity. If my policy proposals have true drawbacks, I want to clearly acknowledge them and state why I think they're worth it, not dissemble about them.
↑ comment by steven0461 · 2019-12-05T03:20:38.975Z · LW(p) · GW(p)
Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don't always respond rationally to arguments. There are cases, like the grandparent comment, when one is justified in worrying that an argument would make people stupid in a particular way, and one can avoid this problem by not making the argument. Doing so is importantly different from filtering out arguments for causing a justified update against one's side, and is even more importantly different from anything similar to what pops into people's minds when they hear "psychological manipulation". If I'm worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about what circumstances it's good to press hypertech buttons under, because they've always vaguely heard that set of thoughts is disreputable and so never looked into it, I don't think your last paragraph is a fair response to that. I think I should tap out of this discussion because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let's still talk some time.
↑ comment by Zack_M_Davis · 2019-12-06T02:51:23.014Z · LW(p) · GW(p)
even more importantly different from anything similar to what pops into people's minds when they hear "psychological manipulation"
That's fair. Let me scratch "psychologically manipulate", edit to "persuade", and refer to my reply to Vaniver [LW(p) · GW(p)] and Ben Hoffman's "The Humility Argument for Honesty" (also the first link in the grandparent) for the case that generic persuasion techniques are (counterintuitively!) Actually Bad.
I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging
I don't think it's the long-form medium so much as it is the fact that I am on a personal vindictive rampage against appeals-to-consequences lately. You should take my vindictiveness into account if you think it's biasing me!
↑ comment by Eli Tyre (elityre) · 2024-08-27T19:05:45.173Z · LW(p) · GW(p)
Were you planning to solve the alignment problem by lobbying Congress to pass appropriate legislation?
Um. Yes, as of 2024, lobbying Congress to get an AI scaling ban, to buy time to solve the technical problem, is now part of the plan.
↑ comment by Zack_M_Davis · 2024-08-28T15:17:39.330Z · LW(p) · GW(p)
2019 was a more innocent time. I grieve what we've lost.
↑ comment by Raemon · 2019-12-04T23:29:18.791Z · LW(p) · GW(p)
One potential reason is Idea Inoculation + Inferential Distance.
↑ comment by jessicata (jessica.liu.taylor) · 2019-11-26T03:09:29.579Z · LW(p) · GW(p)
In those contexts it is reasonable (I don’t know if it is correct, or not), to constrain what things you say, even if they’re true, because of their consequences.
This agrees with Carter:
So, of course you can evaluate consequences in your head before deciding to say something.
Carter is arguing that appeals to consequences should be disallowed at the level of discourse norms, including public discourse norms. That is, in public, "but saying that has bad consequences!" is considered invalid.
It's better to fight on a battlefield with good rules than one with bad rules.
↑ comment by Eli Tyre (elityre) · 2019-11-27T19:08:14.267Z · LW(p) · GW(p)
Hmm...something about that seems not quite right to me. I'm going to see if I can draw out why.
Carter is arguing that appeals to consequences should be disallowed at the level of discourse norms, including public discourse norms. That is, in public, "but saying that has bad consequences!" is considered invalid.
The thing at stake for Quinn_Eli is not whether or not this kind of argument is "invalid". It's whether or not she has the affordance to make a friendly, if sometimes forceful, bid to bring this conversation into a private space, to avoid collateral damage.
(Sometimes of course, the damage won't be collateral. If in private discussion, Quinn concludes, to the best of her ability to reason, that, in fact, it would be good if fewer people donated to PADP, she might then give that argument in public. And if others make bids to, say, explore that privately, at that stage, she might respond, "No. I am specifically arguing that onlookers should donate less to PADP (or think that decreasing their donations is a reasonable outcome of this argument). That isn't accidental collateral damage. It's the thing that's at stake for me right now.")
I don't know if you already agree with what I'm saying here.
. . .
It's better to fight on a battlefield with good rules than one with bad rules.
I don't think we get to pick the rules of the battlefield. The rules of the battlefield are defined only by what causes one to win. Nature alone chooses the rules.
↑ comment by jessicata (jessica.liu.taylor) · 2019-11-27T22:11:42.779Z · LW(p) · GW(p)
Bidding to move to a private space isn't necessarily bad but at the same time it's not an argument. "I want to take this private" doesn't argue for any object-level position.
It seems that the text of what you're saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don't think you actually believe that. Perhaps you've given up on affecting them, though.
("What wins" is underdetermined given choice is involved in what wins; you can't extrapolate from two player zero sum games (where there's basically one best strategy) to multi player zero sum games (where there isn't, at least due to coalitional dynamics implying a "weaker" player can win by getting more supporters))
↑ comment by Raemon · 2019-11-28T00:36:01.885Z · LW(p) · GW(p)
It seems that the text of what you're saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don't think you actually believe that.
How much agency we have is inversely proportional to how many other actors are in a space. I think it's quite achievable (though it requires a bit of coordination) to establish good norms for a space with 100 people. It's still achievable, but... probably at least (10x?) as hard to establish good norms for 1000 people.
But "public searchable internet" immediately puts things in a context with at least millions if not billions of potentially relevant actors, many of whom don't know anything about your norms. I'm still actually fairly optimistic about making important improvements to this space, but those improvements will have a lot of constraints for anyone with major goals that affect the world stage.
↑ comment by Eli Tyre (elityre) · 2019-11-28T00:51:01.733Z · LW(p) · GW(p)
Yes. This, exactly. Thank you for putting it so succinctly.
↑ comment by Eli Tyre (elityre) · 2019-11-28T01:08:49.520Z · LW(p) · GW(p)
Furthermore, you have a lot more ability to enforce norms regarding what people say, as opposed to norms about how people interpret what people say.
↑ comment by Eli Tyre (elityre) · 2019-11-27T23:20:26.032Z · LW(p) · GW(p)
It seems that the text of what you're saying implies you think humans have no agency over discourse norms, regulations, rules of games, etc, but that seems absurd so I don't think you actually believe that. Perhaps you've given up on affecting them, though.
I do think that it is possible, and often correct, to push for some discourse norms over others. I will often reward moves that I think are good, and will sometimes challenge moves that I think are harmful to our collective epistemology.
But I don't think that I have much ability to "choose" how other people will respond to my speech acts. The world is a lot bigger than me, and it would be imprudent to mis-model the fact that, for instance, many people will not or cannot follow some forms of argument, but will just round what you're saying to the closest thing that they can understand. And that this can sometimes cause damage.
(I think that you must agree with this? Or maybe you think that you should refuse to engage in groups where the collective epistemology can't track nuanced argument? I don't think I'm getting you yet.)
Bidding to move to a private space isn't necessarily bad but at the same time it's not an argument. "I want to take this private" doesn't argue for any object-level position
I absolutely agree.
I think the main thing I want to stand for here is both that obviously the consequences of believing or saying a statement have no bearing on its truth value (except in unusual self-fulfilling prophecy edge cases), and that it is often reasonable to say "Hey man, I don't think you should say that here in this context where bystanders will overhear you."
I'm afraid that those two might be getting conflated, or that one is being confused for the other (not in this dialogue, but in the world).
To be clear, I'm not sure that I'm disagreeing with you. I do have the feeling that we are missing each other somehow.
↑ comment by jessicata (jessica.liu.taylor) · 2019-11-28T03:17:40.122Z · LW(p) · GW(p)
I think that you must agree with this?
Yes, and Carter is arguing in a context where it's easy to shift the discourse norms, since there are few people present in the conversation.
LW doesn't have that many active users; it's possible to write posts arguing for discourse norms, sometimes to convince moderators they are good, etc.
and it is often reasonable to say “Hey man, I don’t think you should say that here in this context where bystanders will overhear you.”
Sure, and also "that's just your opinion, man, so I'll keep talking" is often a valid response to that. It's important not to bias towards saying exposing information is risky while hiding it is not.
↑ comment by Raemon · 2019-11-28T00:27:31.693Z · LW(p) · GW(p)
But I do think that I have much ability to "choose" how other people will respond to my speech acts
I think you meant 'do not think'?
↑ comment by Eli Tyre (elityre) · 2019-11-28T00:44:24.587Z · LW(p) · GW(p)
Yep. Fixed.
↑ comment by Eli Tyre (elityre) · 2019-11-25T20:32:03.098Z · LW(p) · GW(p)
Notably, many other commenters seem to be implicitly or explicitly pointing to the private vs. public distinction.
comment by Said Achmiz (SaidAchmiz) · 2019-07-18T05:24:15.993Z · LW(p) · GW(p)
Well, I certainly agree with the position you’re defending. Yet I can’t help but feel that the arguments in the OP lack… a certain concentrated force, which I feel this topic greatly deserves.
Without disagreeing, necessarily, with anything you say, here is my own attempt, in two (more or less independent) parts.
The citadel of truth
If the truth is precious, its pursuit must be unburdened by such considerations as “what will happen if we say this”. This is impractical, in the general case. You may not be interested in consequences, after all, but the consequences are quite interested in you…
There is, however, one way out of the quagmire of consequential anxiety. Let there be a place around which a firewall of epistemology is erected. Let appeals to consequences outside that citadel, be banned within its walls. Let no one say: “if we say such a thing, why, think what might happen, out there, in the wider world!”. Yes, if you say this thing out there, perhaps unfortunate consequences may follow out there. But we are not speaking out there; so long as we speak in here, to each other, let us consider it irrelevant what effects our words may produce upon the world outside. In here, we concern ourselves only with truth. All that we do and say, within the citadel, serves only truth.
Any among us who have something to protect, in the world beyond the citadel, may wish to take the truths we find, and apply them to that outside world, and discuss these things with others who feel as they do. In these discussions, of plans and strategies for acting upon the wider world, the consequences of their words, for that world, may be of the utmost importance. But if so, to have such discussions, these planners will have to step outside the citadel’s walls. To talk in such a way in here—to speak, and to hold yourself and others to considering the consequences of their words upon the world—is to violate the citadel’s rule: that all that we do and say within, serves truth, and only truth.
Without evaluation, consequences have no meaning
The one comes to you and says: if you say such a thing, why, this-and-such will happen!
Well, and what of it? For this to be compelling, it is not enough that the consequences of speech be a certain way, but that you evaluate them a certain way. To find that “this-and-such will happen if you say that” is a reason for not saying it, you must evaluate the given consequence as negative, and furthermore you must weigh it against the other consequences of your speech act—all the other consequences—and judge that this one downside tilts the balance, in favor of silence.
And now suppose one comes to me, and says: Said, this thing you said—consider the consequences of saying it! They are bad. Well, perhaps they are. But consider, I respond, the consequences of allowing you, concerned citizen, to convince me, by this argument of yours, to keep silent. They include corruption of the truth, and the undermining of the search for it, and distortion of the accurate beliefs of everyone I speak to, and my own as well. These, too, are consequences. And I evaluate these effects to have so negative a value, that, short of either the inescapable annihilation of the human race (or outcomes even more dire), or serious personal harm to me or those close to me, no unpleasant consequence you might threaten me with could possibly compare. So I thank you—I continue—for your most sincere concern, but your admonition cannot move me.
In short: it is the case, I find, in most such arguments-from-consequences, that anyone who may be moved by them, does not value truth—does not, at any rate, value it enough that they may rightly be admitted to any circle of truth-seekers which pretends really to be serious in the pursuit of its goal.
↑ comment by Raemon · 2019-11-28T01:58:32.769Z · LW(p) · GW(p)
Note: I had originally intended to write a response post to this called "Building the Citadel of Truth", basically arguing: "Yup, the Citadel of Truth sounds great. Let's build it. Here are my thoughts about the constraints and design principles that would need to go into constructing it"
For various reasons I didn't do that at the time (I think shortly afterwards I sort of burned out on the overall surrounding discourse). I might still do that someday.
I touch upon the issues in this comment [LW(p) · GW(p)], which seems worth quoting here for now:
Ideally, if it's only OTHER people we're worried about social harm from (i.e. non-aspiring-epistemic-rationalists), we still get to talk about the thing to build a fully integrated worldmodel. One property that a Citadel of Truth should have is actually keeping things private from the outside world. (This is a solvable logistical problem, although you do have to actually solve it. It might be good for LW to enable posts to be hidden from logged-out users, perhaps requiring some karma threshold to see taboo posts.)
The hardest of hard modes is "local politics", where it's not just that I'm worried about nebulous "outsiders" hurting me (or friends feeling pressure to disown me because they in turn face pressure from outsiders). Instead, the issue is politics inside the citadel. It seems like a quite desirable property to be able to talk freely about which local orgs and people deserve money and prestige – but I don't currently know of robust game mechanics that will actually, reliably enable this in any environment where I don't personally know and trust each person.
Having multiple "inner citadels" of trust is sort of the de-facto way this is done currently, in my experience. Having clearer signposting on how to get trustworthy might be a net improvement.
Notably, proclaiming "I only care about truth, not politics" is not sufficient for me to trust someone in this domain. I think that's a pretty bad litmus test.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-11-28T09:47:46.435Z · LW(p) · GW(p)
It seems like a quite desirable property to be able to talk freely about which local orgs and people deserve money and prestige – but I don’t currently know of robust game mechanics that will actually, reliably enable this in any environment where I don’t personally know and trust each person.
There should not be any “local orgs” inside the citadel; and if the people who participate in the citadel also happen to, together, constitute various other orgs… well, first of all, that’s quite a bad sign; but in any case discussions of them, and whether they deserve money and so on, should not take place inside the citadel.
If this is not obvious, then I have not communicated the concept effectively. I urge you to once again consider this part:
Any among us who have something to protect, in the world beyond the citadel, may wish to take the truths we find, and apply them to that outside world, and discuss these things with others who feel as they do. In these discussions, of plans and strategies for acting upon the wider world, the consequences of their words, for that world, may be of the utmost importance. But if so, to have such discussions, these planners will have to step outside the citadel’s walls. To talk in such a way in here—to speak, and to hold yourself and others to considering the consequences of their words upon the world—is to violate the citadel’s rule: that all that we do and say within, serves truth, and only truth.
For this reason, I am of the strong opinion that any Citadel of Truth is best built online, and not integrated strongly into any “meatspace” community, and certainly not built within, or “on top of”, or by, any existing such community.
EDIT: The point is, it’s a Citadel of Truth, not—repeat, not!—a Citadel of Discovering the Truth And Then Doing Desirable Things With It, Because That Was Our Goal All Along, Wasn’t It. If that is what you’re trying to build, then forget it; the whole thing is corrupted from the get-go, and will come to no good.
↑ comment by Raemon · 2019-11-28T20:44:14.960Z · LW(p) · GW(p)
EDIT: The point is, it’s a Citadel of Truth, not—repeat, not!—a Citadel of Discovering the Truth And Then Doing Desirable Things With It, Because That Was Our Goal All Along, Wasn’t It. If that is what you’re trying to build, then forget it; the whole thing is corrupted from the get-go, and will come to no good.
Okay, yeah the thing I'm thinking about is definitely different from the thing you're thinking about and I'll refrain from referring to my thing as "The Citadel of Truth".
[Edit: the thing-in-my-head still has a focus on "within the citadel-esque-thing, the primary sacred value is the truth, because to actually Use Truth to Do Desirable Things you need to actually Focus On Truth For Its Own Sake, and yes, this is a bit contradictory, and I'm not 100% sure how to resolve the contradiction.
But, a citadel that's just focused on truth without paying attention to how that truth will actually get applied to anything, that doesn't attempt to resolve the contradiction, doesn't seem very interesting to me. That's not the hard part]
There should not be any “local orgs” inside the citadel; and if the people who participate in the citadel also happen to, together, constitute various other orgs… well, first of all, that’s quite a bad sign; but in any case discussions of them, and whether they deserve money and so on, should not take place inside the citadel.
I do think this is plausibly quite relevant to The Thing I'm Thinking of, independent of whether it's relevant to The Thing You're Thinking Of. Will think on that a bit.
I'm left with sort of a confused "what problem is your conception of the Citadel actually trying to solve", though?
The two main problems that the status quo faces, AFAICT (i.e. if you put down a flag and say "Truth!" and then some people show up and start talking, but nonetheless find that their talk isn't always truth-tracking), are:
- There might be people Out There who dislike what you say, and harm or impose costs on you in some way
- There might be people In the Conversation who have some kind of stake in the conversation, that are motivated to warp it.
I... think the first one is relatively straightforward. (There are two primary strategies I can see: either "deciding not to care", or "being somewhat private / obfuscated". I think the latter is a better strategy, but if you're precommitting to "literally just focus on truth with no optimization towards being able to use that truth later", I think the former strategy probably works fine.)
For the second problem... well, if your solution is to filter/arrange things such that the citadel just doesn't have Local Politics, then this problem doesn't come up in the first place. The domains where this is coming up have to do with situations where Local Politics Is Already Here, and people wishing to be able to speak frankly despite that. The Citadel of Truth doesn't seem like it solves that problem at all. It just posits, Somewhere Else, a Citadel where people who don't have a stake in the Local Politics have conversations that don't end up affecting the Local Politics.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-11-29T00:20:32.394Z · LW(p) · GW(p)
The two main problems that the status quo faces, AFAICT (i.e. if you put down a flag and say “Truth!” and then some people show up and start talking, but nonetheless find that their talk isn’t always truth-tracking), are:
There might be people Out There who dislike what you say, and harm or impose costs on you in some way
There might be people In the Conversation who have some kind of stake in the conversation, that are motivated to warp it.
Note that these problems are not separate, but in fact are inextricably linked. This is because people Out There can come In Here (and will absolutely attempt to do so, in proportion to how successful your Citadel becomes), and also people In Here may decide to interact with social forces Out There.
… situations where Local Politics Is Already Here, and people wishing to be able to speak frankly despite that. The Citadel of Truth doesn’t seem like it solves that problem at all.
Indeed, it does not. Nor is it meant to.
I’m left with sort of a confused “what problem is your conception of the Citadel actually trying to solve”, though?
Figuring out the truth. Note, as per my other comment [LW(p) · GW(p)], that we currently do not have any institutions that have just that as their goal. Really—none. (If you think that this claim is obviously wrong, then, as usual: provide examples!)
↑ comment by Raemon · 2019-11-29T00:53:11.413Z · LW(p) · GW(p)
Note that these problems are not separate, but in fact are inextricably linked. This is because people Out There can come In Here (and will absolutely attempt to do so, in proportion to how successful your Citadel becomes), and also people In Here may decide to interact with social forces Out There.
...
Note, as per my other comment [LW(p) · GW(p)], that we currently do not have any institutions that have just that as their goal. Really—none. (If you think that this claim is obviously wrong, then, as usual: provide examples!)
...
It is the hard part. It really, really is.
I'm not sure our models here are that different. What I'd argue (not sure if you'd disagree), is something like:
We have no institutions whose sole goal is to figure out the truth, but the reason for this is that to be an "institution" (as opposed to some random collection of people just quietly figuring stuff out) you need some kind of mechanism for maintaining the institution, and this inevitably ends up instantiating its own version of Local Politics even if it initially didn't have such a thing.
I don't have clear examples, no, but my guess is that there are, in fact, various small citadels throughout the world, but any citadel that's successful enough for both of us to have heard of it, was necessarily successful enough to attract attention from Powers That Be.
Wikipedia and Academic Science both come to mind as institutions that have their own politics, but which I (suspect) still do okay-ish at generating little pocket-citadels that succeed at focusing on whatever subset of truthseeking they've specialized in – individual departments, projects, or research groups. The trouble lies in the outside world distinguishing which pockets are generating "real truth" and which are not (because any institution that became known as a distinguishing tool would probably become corrupted).
Perhaps one core disagreement here is about which problem is 'actually impossible'?
- I say, you don't have the option of avoiding Local Politics, so the task is figuring out how to minimize the damage that local politics can do to epistemics (possibly aided by forking off private bubbles that are mostly inert to outsiders, thinking on their own, but reporting their findings periodically)
- You say... something like 'local politics is so toxic that the task must be to figure out a way to avoid it'?
Does that sound right?
↑ comment by Said Achmiz (SaidAchmiz) · 2019-11-29T01:41:36.621Z · LW(p) · GW(p)
Does that sound right?
Well, roughly. I don’t think it’s possible to entirely avoid “local politics”, in a totally literal sense, because any interaction of people within any group will end up being ‘politics’ in some sense.
But, certainly my view is closer to the latter than to the former, yes. Basically, it’s just what I said in this earlier comment [LW(p) · GW(p)]. To put it another way: if you already have “local politics”, you’re starting out with a disadvantage so crippling that there’s no point in even trying to build any “citadel of truth”.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-11-29T00:28:08.274Z · LW(p) · GW(p)
… to actually Use Truth to Do Desirable Things you need to actually Focus On Truth For Its Own Sake, and yes, this is a bit contradictory, and I’m not 100% sure how to resolve the contradiction
I do not think there is any way to resolve the contradiction. It seems clear to me that just as no man may serve two masters, no organization may serve two goals. “What you are willing to trade off, may end up traded away” [LW · GW]. And ultimately, you will sacrifice your pursuit of truth, if what you are actually pursuing is something else—because there will come a time when your actual goal turns out (in that situation, at that time, in that moment) to not be best served by pursuing Truth, for its own sake or otherwise.
And then your Citadel will not even be a Citadel of Truth And Something Else, but only a Citadel of Something Else, And Not Truth At All.
↑ comment by Zack_M_Davis · 2019-12-01T18:57:30.516Z · LW(p) · GW(p)
I think there's still some highly technical [LW · GW] apparent-contradiction-resolution to do in the other direction: in a monist physical universe, you can't quite say, "only Truth matters, not consequences", because that just amounts to caring about the consequence of there existing a physical system that implements correct epistemology: the map is part of the territory.
To be clear, I think almost everyone who brings this up outside the context of AI design is being incredibly intellectually dishonest. ("It'd be irrational to say that—we'd lose funding! And if we lose funding, then we can't pursue Truth!") But I want to avoid falling into the trap of letting the forceful rhetoric I need to defend against bad-faith appeals-to-consequences, obscure my view of actually substantive philosophy problems.
↑ comment by Said Achmiz (SaidAchmiz) · 2019-11-29T00:15:03.326Z · LW(p) · GW(p)
Everything else you said aside…
But, a citadel that’s just focused on truth without paying attention to how that truth will actually get applied to anything, that doesn’t attempt to resolve the contradiction, doesn’t seem very interesting to me. That’s not the hard part
It is the hard part. It really, really is.
If you doubt this, witness the fact that we currently have no such institutions.
comment by juliawise · 2019-07-19T14:24:13.385Z · LW(p) · GW(p)
[speaking for myself, not for any organization]
If this is an allegory against appeals to consequences generally, well and good.
If there's some actual question about whether wrong cost effectiveness numbers are being promoted, could people please talk about those numbers specifically so we can all have a try at working out if that's really going on? E.g. this post made a similar claim to what's implied in this allegory, but it was helpful that it used concrete examples so people could work out whether they agreed (and, in that case, identify factual errors).
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-19T16:13:21.486Z · LW(p) · GW(p)
This is an allegory. While I didn't have any particular real-world example in mind, my dialogue-generation was influenced by a time I had seen appeals to consequences in EA; see EA Has A Lying Problem and this comment thread. So this was one of the more salient cases of a plausible moral case for shutting down true speech.
comment by Dagon · 2019-07-18T16:27:27.223Z · LW(p) · GW(p)
I think this is strawmanning the appeal-to-consequences argument, by mixing up private beliefs and public statements, and by ending with a pretty superficial agreement on rule-consequentialism without exploring how to pick which rule (among one for improving private beliefs, one for sharing relevant true information, and one for suppressing harmful information) applies.
The participants never actually attempt to resolve the truth about puppies saved per dollar, calling the whole thing into question - both whether their agreement is real and whether it's the right thing. Many of these discussions should include a recitation of [ https://wiki.lesswrong.com/wiki/Litany_of_Tarski ], and a direct exploration of whether it's beliefs (private) or publication (impacting presumed-less-rational agents) that is at issue.
In any case, appeals to consequences at the meta/rule level still HAVE to be grounded in appeals to consequences at the actual object-consequence level. A rule that has so many exceptions that it's mostly wrong is actively harmful. My objection to the objection to "appeal to consequences" is that the REAL objection is to bad epistemology of consequence prediction, not to the desire to predict consequences.
In a completely separate direction, consequences of speech acts in public/group settings are WAY more complicated than epistemic consequences of a truth-seeking discussion among a small group of fairly close rationalist-inclined friends. Both different rules/defaults/norms apply, and different calculations of consequences of specific speech actions are made.
All that said, I prefer norms that lean toward truth-telling and truth-seeking, and it makes me suspicious when that is at odds with consequences of speech acts. I have a higher standard of evidence for my consequence predictions for lying than I have for withholding relevant facts than I have for truth-telling.
comment by Richard_Kennaway · 2019-12-04T13:41:51.720Z · LW(p) · GW(p)
Carter is a mistake theorist, Quinn is a conflict theorist. At no point does Quinn ever talk about truth, or about anything, really. His words are weapons to achieve an end by whatever means possible. There is no more meaning in them than in a fist. Carter's meta-mistake is to believe that he is arguing with someone. Quinn is not arguing; he is in a fist fight.
comment by PeterMcCluskey · 2019-07-19T19:01:55.040Z · LW(p) · GW(p)
Quinn: “Hold it right there. Regardless of whether that’s true, it’s bad to say that.”
Carter: “That’s an appeal to consequences, well-known to be a logical fallacy.”
The link in Carter's statement leads to a page that clearly contradicts Carter's claim:
In logic, appeal to consequences refers only to arguments that assert a conclusion's truth value (true or false) without regard to the formal preservation of the truth from the premises; appeal to consequences does not refer to arguments that address a premise's consequential desirability (good or bad, or right or wrong) instead of its truth value.
↑ comment by jefftk (jkaufman) · 2019-07-19T20:37:25.267Z · LW(p) · GW(p)
It sounds to me like Jessica is using "appeal to consequences" expansively, to include not just "X has bad consequences so you should not believe X" but also "saying X has bad consequences so you should not say X"?
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-19T21:22:52.292Z · LW(p) · GW(p)
Yes. In practice, if people are discouraged from saying X on the basis that it might be bad to say it, then the discourse goes on believing not-X. So, the discourse itself makes an invalid step that's analogous to an appeal to consequences: "if it's bad for us to think X is true, then it's false".
↑ comment by Dagon · 2019-07-19T21:38:46.184Z · LW(p) · GW(p)
Be careful with unstated assumptions about belief aggregation. "the discourse" doesn't have beliefs. People have beliefs, and discourse is one of the mechanisms for sharing and aligning those beliefs. It helps a lot to give names to people you're worried about, to make it super-clear whether you're talking about your beliefs, your current conversational partner's beliefs, or beliefs of other people who hear a summary from one of you.
If Alice discourages Bob from saying X, then Charlie might go on believing not-X. This is a very different concern from Bob being worried about believing a false not-X if not allowed to discuss the possibility. Both concerns are valid, IMO, but they have different thresholds of importance and different trade-offs to make in resolution.
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-19T21:42:18.051Z · LW(p) · GW(p)
In a math conversation, people are going to say and possibly write down a bunch of beliefs, and make arguments that some beliefs follow from each other. The conversation itself could be represented as a transcript of beliefs and arguments. The beliefs in this transcript are what I mean by "the discourse's beliefs".
comment by Evan_Gaensbauer · 2019-07-25T09:22:44.074Z · LW(p) · GW(p)
Summary: I'm aware of a lot of examples of real debates that inspired this dialogue. It seems that in those real cases, a lot of disagreement with or criticism of public claims or accusations of lying against different professional organizations in effective altruism, or AI risk, has repeatedly been generically interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the different kinds of disputes over the public accusations of lying, repeat accusations, and justifications for them, are made into long, complicated theories. These theories don't appear to respond at all to the content of the disagreements with the public accusations of lying and dishonesty, and that's why those repeat accusations and the justifications for them are poorly received.
These complicated theories don't have anything to do with what people actually want when public accusations of dishonesty or lying are being made: what is typically called 'hard' (e.g., robust, empirical) evidence. If you were to make narrow claims of dishonesty with more modest language, based on just the best evidence you have, and be willing to defend the claims based on that, instead of making broad claims of dishonesty with ambiguous language based on complicated theories, they would be received better. That doesn't mean the theories of how dishonesty functions in communities, as an exploration of social epistemology, shouldn't be written. It's just that they do not come across as the most compelling evidence to substantiate public accusations of dishonesty.
For me it's never been so complicated as to require involving decision theory. It's as simple as the problem being that some of the basic claims are made into much larger, more exaggerated or hyperbolic claims. They also require readers, presumably a general audience among the effective altruism or rationality communities, to have prior knowledge of a bunch of things they may not be familiar with. Readers will only be able to parse the claims being made by reading a series of long, dense blog posts that don't really emphasize the thing these communities should be most concerned about.
Sometimes the claims being made are that Givewell is being dishonest, and sometimes they are something like: because of this, the entire effective altruism movement has been totally compromised, and is also incorrigibly dishonest. There is disagreement, some of it disputing how the numbers were used in the counterpoint to Givewell, and some of it about the hyperbolic claims that appear as though they're intended to smear more people than whoever at Givewell, or elsewhere in the EA community, is responsible. It appears as though people like you or Ben don't sort through, try to parse, and work through these different disagreements or criticisms. It appears as though you just take all that at face value as confirmation that the rest of the EA community doesn't want to hear the truth, and that people worship Givewell at the expense of any honesty, or something.
It's in my experience too that, with these discussions of complicated subjects that appear very truncated to those unfamiliar, the instructions are just to go read some much larger body of writing or theory to understand why and how people are deceiving themselves, each other, and the public in the ways you're claiming. This is often said as if it's completely reasonable to claim it's the responsibility of a bunch of people with other criticisms of or disagreements with what you're saying to go read tons of other content, when you are calling people liars, instead of you being able to say what you're trying to say in a different way.
I'm not even saying that you shouldn't publicly accuse people of being liars if you really think they're lying. If you believe that Givewell or other actors in effective altruism have failed to change their public messaging after being, by their own convictions, correctly pointed out as wrong, then just say that. It's not necessary to claim that the entire effective altruism community is therefore also dishonest. That is especially the case for members of the EA community who disagree with you, not because they dishonestly refused the facts they were confronted with, but because they were disputing the claims being made, and their interlocutor refused to engage, or deflected all kinds of disagreements.
I'm sure there are lots of responses to criticisms of EA which have been needlessly hostile. Yet reacting, and writing strings of posts, as though the whole body of responses were consistently just garbage is not an accurate picture of the responses you and Ben have received. Again, if you want to write long essays about the implications for social epistemology of how people react to public accusations of dishonesty, that's fine. It would just suit most people better if that was done entirely separately from the accusations of dishonesty. If you're publicly accusing some people of being dishonest, just accuse those and only those people of being dishonest, very specifically. Stop tarring so many other people with such a broad brush.
I haven't read your recent article accusing some actors in AI alignment of being liars. This dialogue seems like it is both about that and a response to other examples. I'm mostly going off those other examples. If you want to say someone is being dishonest, just say that. Substantiate it with the closest thing you have to hard or empirical evidence that some kind of dishonesty is going on. It's not going to work with an idiosyncratic theory of how what someone is saying meets some kind of technical definition of dishonesty that defies common sense. I'm very critical of a lot of things that happen in effective altruism myself. It's just that the way you and Ben have gone about it is so poorly executed, and backfires so much, that I don't think there is any chance of you resolving the problems you're trying to resolve with your typical approaches.
So, I've given up on keeping up with the articles you're writing criticizing things happening in effective altruism, at least on a regular basis. Sometimes others nudge me to look at them. I might get around to them eventually. It's honestly at the point, though, where the pattern I've learned to follow is not to be open-minded about whether the criticisms being made of effective altruism are worth taking seriously.
The problem I have isn't the problems being pointed out, or that different organizations are being criticized for their alleged mistakes. It's that the presentation of the problem, and the criticism being made, are often so convoluted I can't understand them, and that's before I can figure out if I agree or not. I find that I am generally more open-minded than most people in effective altruism about taking seriously criticisms made of the community, or related organizations. Yet I've learned to suspend that for the criticisms you and Ben make, for the reasons I gave, because it's just not worth the time and effort to do so.
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-25T09:38:29.623Z · LW(p) · GW(p)
This is a fictional dialogue demonstrating a meta-level point about how discourse works, and your comment is pretty off-topic. If you want to comment on my AI timelines post, do that (although you haven't read it so I don't even know which of my content you're trying to comment on).
↑ comment by dxu · 2019-07-25T17:24:50.585Z · LW(p) · GW(p)
This is a fictional dialogue demonstrating a meta-level point about how discourse works, and your comment is pretty off-topic.
I think that if a given "meta-level point" has obvious ties to existing object-level discussions, then attempting to suppress the object-level points when they're raised in response is pretty disingenuous. (What I would actually prefer is for the person making the meta-level point to be the same person pointing out the object-level connection, complete with "and here is why I feel this meta-level point is relevant to the object level". If the original poster doesn't do that, then it does indeed make comments on the object-level issues seem "off-topic", a fact which ought to be laid at the feet of the original poster for not making the connection explicit, rather than at the feet of the commenter, who correctly perceived the implications.)
Now, perhaps it's the case that your post actually had nothing to do with the conversations surrounding EA or whatever. (I find this improbable, but that's neither here nor there.) If so, then you as a writer ought to have picked a different example, one with fewer resemblances to the ongoing discussion. (The example Jeff gave in his top-level comment, for example, is not only clearer and more effective at conveying your "meta-level point", but also bears significantly less resemblance to the controversy around EA.) The fact that the example you chose so obviously references existing discussions that multiple commenters pointed it out is evidence that either (a) you intended for that to happen, or (b) you really didn't put a lot of thought into picking a good example.
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-25T17:39:38.202Z · LW(p) · GW(p)
I shouldn't have to argue about the object-level political consequences of 1+4=5 in a post arguing exactly that. This is the analytic-synthetic distinction / logical uncertainty / etc.
Yes, I could have picked a better, less political example, as recommended in Politics is the Mind Killer. In retrospect, that would have caused less confusion.
Anyway, Evan has the option of commenting on my AI timelines post, open thread, top level post, shortform, etc.
↑ comment by romeostevensit · 2019-07-30T14:30:34.761Z · LW(p) · GW(p)
In metaphysical conflicts, people don't win by coming up with the best evidence; they win by controlling what gets counted as evidence. By default, memeplexes gain stability by creating an environment in which evidence against them can't be taken seriously. Arguments that EA has failed to actually measure the things it claims are worth measuring should be taken very seriously on their face, since that is core to the claims of moral obligation (which is itself a bad frame, but less serious).
comment by Wei Dai (Wei_Dai) · 2019-07-21T00:13:44.183Z · LW(p) · GW(p)
So, in summary: if we’re going to have suppression of some facts being said out loud, we should have that through either clear norms designed with consequences (including consequences for epistemology) in mind, or individuals deciding not to say things, but otherwise our norms should be protecting true speech, and outlawing appeals to consequences.
- Are you happy with a LW with multiple norm sets, where this is one of the norm sets you can choose?
- What's your plan if communities or sub-communities with these norms don't draw enough participants to become or stay viable? (One could argue that's what happened to LW1, at least in part. What do you think?)
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-21T09:56:58.684Z · LW(p) · GW(p)
- Yes.
- Think about why that is and adjust strategy and norms correspondingly. (Sorry that's underspecified, but it actually depends on the reasons.) I don't know what happened to LW1, but it did have pretty high intellectual generativity for a while.
↑ comment by Ruby · 2019-07-21T20:32:41.813Z · LW(p) · GW(p)
I don't know what happened to LW1, but it did have pretty high intellectual generativity for a while.
I think Wei Dai said that too elsewhere. When each of you says intellectual generativity, do you mean the site as a whole (posts + discussions), or specifically that the discussions in comments were more generative?
The other question is whether you think you can quantitatively state some factor by which LW1 was more generative than LW2. If it was only 2x, that would suggest less generativity per person/comment than current LW, since old LW had much more than double the number of users and comments. If it was 10x, then LW1 was qualitatively better in some way.
(I'd expect the output to be a right-tailed distribution over individuals. LW2 could be less generative than LW1 because the top N users which produced 80% of the value left, so it's not really about the raw number of users/comments.
The most interesting scenario would be if it were all the same people, but they were being less generative.)
↑ comment by jessicata (jessica.liu.taylor) · 2019-07-21T21:43:53.577Z · LW(p) · GW(p)
The site as a whole.
I wasn't around in early LW, so this is hard for me to estimate. My very, very rough guess is 5x. (Note, IMO the recent good content is disproportionately written by people willing to talk about adversarial optimization patterns in a somewhat-forceful way despite pressures to be diplomatic)
↑ comment by Dagon · 2019-07-25T18:32:00.668Z · LW(p) · GW(p)
I have noted this as well, and I find it worrisome. Many recent interesting conversations are more about social and interpersonal communication / alignment than about personal or theoretical rationality and decision-making. I like it because they are actually interesting topics. I worry that they're crowding out or hiding a painful decline of more core rationality discussions. I don't worry that they're too close to politics (I think they are close to politics, but are narrow enough that they seem to fall prey to the standard problems more because they're trying to skate around the issue rather than being direct).
I had not framed them as "adversarial optimization patterns", mostly because they seriously bury that lede. It would be useful to directly acknowledge that almost all groups of more than one person (and in some models, including an individual human) contain multiple simultaneous games, with very different payout matrices and equilibria which impact other games. Values start out divergent, and this can't be assumed away for any part of reality.
↑ comment by Ruby · 2019-07-21T23:09:01.753Z · LW(p) · GW(p)
Yeah, granted that it's going to be rough.
5x seems consistent with the raw activity numbers [LW(p) · GW(p)], though. Eyeballing it, it seems like LW1 was about 4x more active in terms of comments and commenters. The number of posts is pretty close.
comment by cousin_it · 2019-07-18T12:42:11.728Z · LW(p) · GW(p)
So if evidence against X is being suppressed, then people's belief in X is unreliable, so it can't justify suppressing evidence against X. That's a great argument for free speech, thanks! Do you know if it's been stated before?
↑ comment by Said Achmiz (SaidAchmiz) · 2019-07-18T13:19:13.158Z · LW(p) · GW(p)
This doesn’t seem quite right to me.
Consider this example:
“Evidence against the Holocaust is being suppressed[1]. Therefore people’s belief in the Holocaust is unreliable. And so we cannot justify suppressing Holocaust denial by appealing to the (alleged) fact of the Holocaust having occurred.”
Something is wrong here, it seems to me. Not with the conclusion, mind you, the policy proposal, as it were; that part is all right. But the logic feels odd, don’t you think?
I don’t have a full account, yet, of cases like this, but it seems to me that some of the relevant considerations are as follows. Firstly, we previously undertook a comprehensive project (or multiple such) to determine the truth of the matter, which operated under no such restrictions as we now defend, and came to conclusions which cannot be denied. Secondly, we have people whose belief in the facts of the matter come from personal experience, and are not at all contingent on (nor even alterable by) any evidence we may or may not now present. Thirdly, as the question is one of historical fact, no new evidence may be generated; previously unknown but existing evidence may be uncovered, or currently known evidence may be shown to be misleading or fraudulent, but there is (it seems) no question of experimentation, or similar de novo generation of data.[2]
Now, I do not say that these considerations absolutely refute the given logic. But it seems to me that they seriously undermine its force. Here is a situation where (or so it seems!) we may be quite certain of our conclusions, have no reason to expect anything more than the most infinitesimal (for practical purposes, nil) chance for our understanding of the facts to ever change, and thus may suppress the presentation of any alleged evidence against our current view, while maintaining the rational belief that we are not thereby risking some distortion of our grasp of the facts.
I can think of certain counterarguments (and in any case I do not endorse the conclusion suggested by this line of argument, for somewhat-unrelated reasons), but I am curious to see what you make of this.
Full disclosure: I have quite a few family members who survived the Holocaust (and many more, of course, who did not). I am also strongly opposed to laws against Holocaust denial.
[1] If you live in parts of Europe, for example, where there are criminal penalties for Holocaust denial. ↩︎
[2] Barring, perhaps, the invention of time travel, inter-temporal observation, or similar fantastic technologies. ↩︎
↑ comment by cousin_it · 2019-07-18T14:03:05.712Z · LW(p) · GW(p)
Yes, having strong unfiltered evidence for X can justify suppressing evidence against X. But if suppression is already in effect, and someone doesn't already have unfiltered evidence, I'm not sure where they'd get any. So the share of voters who can justify suppression will decrease over time.
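To make the unreliability point concrete, here's a toy simulation (Python, with invented numbers, not a model of any particular case): when reports against X are mostly filtered out before anyone sees them, the surviving public evidence favours X about as strongly whether X is true or false, so it tells an observer very little.

```python
import random

random.seed(0)

def surviving_reports(x_is_true, n_reports=1000, suppression=0.99):
    """Fraction of surviving public reports that support X.

    Each witness reports correctly with probability 0.8; reports that count
    as evidence *against* X are then suppressed with probability `suppression`.
    """
    survivors = []
    for _ in range(n_reports):
        says_x = random.random() < (0.8 if x_is_true else 0.2)
        if says_x or random.random() > suppression:
            survivors.append(says_x)
    return sum(survivors) / len(survivors)

# With heavy suppression, the surviving reports overwhelmingly favour X
# whether X is actually true or false...
print(f"X true, suppressed : {surviving_reports(True):.2f}")
print(f"X false, suppressed: {surviving_reports(False):.2f}")

# ...whereas unfiltered reports clearly separate the two cases.
print(f"X true, unfiltered : {surviving_reports(True, suppression=0.0):.2f}")
print(f"X false, unfiltered: {surviving_reports(False, suppression=0.0):.2f}")
```

Under these made-up parameters, only someone who saw the reports before filtering gets much evidential value out of them, which is the sense in which the post-suppression belief can't do the justifying work.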
comment by Slider · 2019-07-20T04:03:49.534Z · LW(p) · GW(p)
"If we want to do those things, we have to do them by getting to the truth"
This seems fair if it focuses on the rationalist strategy of trying to interface with the world, and on how truth is essential to that. However, it's probably not literally true, in that there are probably Dark Arts and such which can provide those specific sought goods at outrageous prices. "Have to" in this context means "within the options we have created for ourselves", not "it is not possible to produce the effect via other means".
Carter states that the standard for discussion norms is whether they obscure or reveal the truth. But then, on radical honesty, it is not counted in radical honesty's favour that the truth is more likely to come out; some unspecified "is bad for people" is found to be sufficient reason to abandon it. It is not clear whether this means "epistemologically bad" or bad in the ordinary sense of "bad consequences". This ends up being a total cop-out, to my mind: the known downsides of radical honesty somehow override his stated principle that obscuring versus revealing the truth is the standard. I think this reveals that he is a hypocrite about finding the good only via truth, since here that principle is not applied and common-sense knowledge about social harshness overrides it. The door would then be open for some other metric to have norm-level good impact that outweighs the epistemological impact.
It would also seem very abusable if all truths always had to be evaluated on the object level. In law there is the "fruit of the poisonous tree" principle: evidence obtained via illegal means can't be used to establish guilt. If a court were forced to take into account all true facts, cops would be tempted to commit small crimes to get evidence for big crimes. A court can have divided loyalties, in that fairness is not the same as truth, and a procedure that ensures fairness but diminishes truth can be acceptable.
The opening fact statement did contain information conducive to credibility evaluation (who said it), in addition to the puppy number. But I could very well imagine that this "foundation" would be insufficiently firm to let the discussion fly. If, for example, I said "I heard the wind whisper to me that so-and-so organization saved X puppies last year", a natural curiosity would be "the wind whispered to you?", and this would be processed before any processing of X began; it could well end in the conclusion "I am not hearing about your hallucinations one bit more". This foundation-building is a natural place to put other checks, but even if one is concerned only with truth, there must be some reason why the information is relevant. Before you take too seriously what is written on a paper, you must to some degree believe that the paper existed. But there is no door to pure hypotheticals: you don't get to submit X for consideration if you don't have at least a slight degree of justified belief in it. And maybe some base level of indication is given by "because I am saying so"; after all, what are observations but stubbornly correlated hallucinations? But by your word alone it's a mere claim.