A reply to Agnes Callard
post by Vaniver · 2020-06-28T03:25:27.378Z · LW · GW · 36 comments
Agnes Callard says on Twitter:
Sincerely can't tell which threatens culture of free thought & expression more:
@nytimes de-anonymizing & destroying a (rightly) treasured blog for no good reason
@nytimes increasingly allowing external mobs (w/powerful members) influence over what it publishes
I believe that the arguments in this op-ed--about why philosophers shouldn't use petitions to adjudicate their disputes--also apply to those non-philosophers who, for independent reasons, are committed to the power of public reason. https://t.co/elZkZgBYPD?amp=1
[The link is to a NYT op-ed she wrote; which is now available for free.]
A friend linked to this on Facebook, and I replied with a Bible verse:
Render unto Caesar the things that are Caesar's, and unto God the things that are God's.
But, in the spirit of public reason, let me explain my meaning.
Different people present different interfaces to the public. Philosophers want to receive and know how to handle reasoned arguments; politicians want to receive and know how to handle votes. Present a politician with a philosophical argument, and they might think it's very nice, but they won't consider it as relevant to their duties as a politician as polling data. "The voters aren't choosing me to think," they might say; "they're choosing me to represent their will."
This isn't to single out politicians as 'non-philosophers'; businesspeople have their own interface, where they want to receive and know how to handle money, and athletes have their own interface, and musicians, and so on and so on. This is part of the society-wide specialization of labor, and is overall a good thing.
Callard argues that philosophers shouldn't sign petitions, as philosophers--in my framing, that it corrupts the interface of 'philosophers' for them to both emit reasoned arguments and petitions, which are little more than counts of bodies and clout. If they want to sign petitions, or vote, or fight in wars on their own time, that's fine; but those political behaviors are not the business of philosophy, and they should do them as people instead of as philosophers.
Overall, this argument seems right to me, except I think it also makes sense for individuals, even those personally committed to public reason, to honor the other disciplines by engaging with them the way they want to be engaged with, so long as it is consistent with one's conscience. If you want, say, a politician to take seriously Aliens and Citizens, you need to assemble a voting bloc that is in favor of open borders, rather than simply sending them copies of the paper. If you want a businessperson to produce anti-malaria nets for people in need, the thing to do is assemble a pile of money to trade them for the nets.
And so the question becomes: when an editor at the New York Times makes a decision that seems wrong-headed and cruel, what interface do they present to the world, and how should we make use of it?
In this particular case, the editor in question is not a philosopher, and to the best of my knowledge hasn't elected the interface of philosophy. [Scott was simply informed of the editor's decision, not her reasons for the decision, which we can only imagine. If it hinges on a reasoned defense of pseudonyms, I am happy to provide one; if it hinges on proof that pseudonyms are important to our culture, the petition seems like the best way to provide that.]
Callard, in the replies, tweets:
Good thought. I think if there were some way for the document (which wouldn't then be a petition) to convey: "this is what we think but if, after careful deliberation, you still believe that you are doing the right thing, we will support your decision"--I might feel differently.
I have employed this strategy in the past, when I felt someone else owned a decision or I overall trusted their judgment. But it feels like this is a tool with a narrow use, rather than one that applies broadly. I wouldn't say to Putin, "I personally think it is wrong to murder journalists, but if after careful deliberation, you still believe that you are doing the right thing, I will support your decision." His belief that he is doing the right thing (if he even casts the decision in those terms) is not a crux for me, and would not change my views.
Similarly, I don't yet see a reason to treat the editor's belief that Scott's birth name should be published--instead of referring to him by his pseudonym, as the NYT has done many times before for others in similar situations--as something that should update my beliefs on the value of pseudonymity, rather than as a simple reflection of their callousness.
36 comments
Comments sorted by top scores.
comment by Agnes Callard · 2020-06-28T16:33:52.446Z · LW(p) · GW(p)
Hi, thanks for writing this, someone linked me to it on twitter and I wrote a reply there: https://twitter.com/AgnesCallard/status/1277274771735089152?s=20
Replies from: lionhearted, Vaniver, Agnes Callard, Zvi
↑ comment by lionhearted (Sebastian Marshall) (lionhearted) · 2020-06-30T16:53:33.767Z · LW(p) · GW(p)
Hi Agnes, I just wanted to say — much respect and regards for logging on to discuss and debate your views.
Regardless of whether we agree (personally, I'm in partial agreement with you), if more people would create accounts and engage thoughtfully in different spaces after sharing a viewpoint, the world would be a much better place.
Salutations and welcome.
↑ comment by Vaniver · 2020-06-28T19:03:28.624Z · LW(p) · GW(p)
I see the central issue--also raised in replies to my tweet--as: if you believe someone's arguing in bad faith, isn't it ok to engage non-rationally w them?
I agree the question "isn't it okay to engage non-rationally w them?" is the central question. I disagree on the first half, though; my main question is: what makes you think the NYT is arguing?
If, say, you put forward your argument for why petitions are bad, and it was broadly ignored, that would be bad; if there were arguments against pseudonyms, and we crushed them rather than responding to them, that would be bad. But this is a place where someone is exercising arbitrary judgment, and presenting petitions is an old and peaceful way of influencing the arbitrary exercise of power.
I think that when Plato goes to the agora and sees someone selling fruit for drachmae, he does not think "it would be unreasonable to settle an argument by paying my interlocutor; is it ok to engage non-rationally with the merchant?" I think when Plato goes to the wrestling ring, he does not think "physical strength does not determine correctness of arguments, is it ok to engage non-rationally with my opponent?"
Now, probably at one point he's engaged rationally with the questions of whether and how to engage with commerce and sport, and that seems good to me. But the Plato who tries to wrestle with arguments instead of with his body is confused, not heroic.
Replies from: Agnes Callard, Slider
↑ comment by Agnes Callard · 2020-06-29T14:00:34.571Z · LW(p) · GW(p)
I think if you disagree with what someone thinks, or plans to do, the rational response is an argument to persuade them that they are wrong. (This is true irrespectively of whether they were, themselves, arguing, and it goes for the fruit-seller, the wrestler, etc. too.)
Of course if what you want is to acquire fruit from someone or defeat them in wrestling--as opposed to showing them that they are wrong--then you should not use argument, but money/force.
This has led me to ponder the following question:
What is the difference between trying to persuade someone that something is the right or best thing for them to do, and trying to incentivize them to do that thing (by payment, or threats, etc.)?
I do believe there is a difference, but I do not have an adequate account of what the difference is.
Thanks for bringing this problem to my attention.
Replies from: Vaniver, Zvi, Raemon
↑ comment by Vaniver · 2020-06-30T23:44:37.556Z · LW(p) · GW(p)
Thanks for bringing this problem to my attention.
You're welcome, and I'm curious to see what you end up thinking here.
I think if you disagree with what someone thinks, or plans to do, the rational response is an argument to persuade them that they are wrong. (This is true irrespectively of whether they were, themselves, arguing, and it goes for the fruit-seller, the wrestler, etc. too.)
As pointed out by Raemon in a sibling comment, here I think we want to start using a more precise word [LW · GW] than "rational." [Up until this point, I think I've been using "engage rationally" in a 'standard' way instead of in a 'Less Wrong specific way'.]
I'm going to say the 'argumentative' response is an 'argument to persuade them that they are wrong', and agree that purely argumentative responses are important for communicative rationality. The thing that's good about argumentative responses (as opposed to, say, purely persuasive ones) is that they attempt to be weaker when the claims they favor are not true than when they are true; and this helps us sort our beliefs and end up with truer ones.
I think for many disagreements, however, I want to do a thing that doesn't quite feel like argumentation; I want to appeal to reality. This involves two steps: first, an 'argument' over what observations imply about our beliefs, and second, an observation of reality that then shifts our beliefs. The first is an argument, and we do actually have to agree on the relationship between observations and beliefs for the second step to do anything useful. This doesn't help us establish logical truths, or things that would be true in any world, except indirectly; what it does help us do is establish empirical truths, or things that are true in our world (but could be false in others). Imagine a portal across universes that allows us to communicate with aliens who live under different physics than we do; it would be a tremendous surprise for our mathematicians and their mathematicians to disagree, whereas our chemists and their chemists disagreeing wouldn't be surprising at all.
I think that the wrestling match falls into this category; if a rival claims "I could out-wrestle Plato", then while Plato could respond with theories of wrestling and other logic, the quickest path to truth seems to be Plato responding with "let's settle that question in the ring." There's the same two-part structure of "agree on what test bears on the question" and then "actually running the test." I don't think buying fruit falls into this category. ["Quickest path to truth" might not be the right criterion, here, but it feels likely to be close.]
Continuing to flesh out this view, besides "appeal to logic" and "appeal to reality" there's something like "assertion of influence." This seems like the category that buying fruit falls into; I have some ability to change the external world to be more to my liking, and I trade some of that ability to the merchant for fruit. There seem to be ethical ways to do this (like free commerce) and unethical ways to do this (like stealing), and in particular there seem to be many ways for assertion of influence to choke off good things.
I think 'ethical' and 'unethical' look more like 'congruent with values X' or 'incongruent with values X' than it does like 'logically valid' or 'logically invalid'. [In this way, it more resembles the category of empirical truth, in which things are 'congruent with world X' or 'incongruent with world X' as opposed to 'congruent with all possible worlds' or not.]
And so we end up with questions that look like "how do we judge what influence is congruent with our values, and what influence is incongruent with our values?", and further questions upstream like "what do our meta-values imply our values should be?", and so on.
[There's much more to say here, but I think I'll leave it at this for now.]
↑ comment by Zvi · 2020-06-29T23:15:10.777Z · LW(p) · GW(p)
Let's give that account a shot.
This seems to me like it's "action to improve accuracy of target's map" vs. "action on both map and territory" with the strange case being "action to decrease the accuracy of the target's map".
An agent/person is considering whether to take some action. That action will have consequences, and various justifications other than its consequences for taking or not taking the action. The agent/person also has a map, which includes both what they believe those consequences would be, and what other reasons exist for taking or not taking that action and how much weight each of these things should carry.
Suppose we wish to prevent this action from taking place. We have two basic approaches here:
We can act upon their map alone, or we can act upon the territory and use this to change their map.
When we engage in a philosophical or other argument with them, we're not trying to change what effect the action will have, or change any other considerations. Instead, we are trying to update their map such that they no longer consider the action worthwhile. Perhaps we can get them to adopt new philosophical principles that have this effect. Perhaps we are doing something less abstract, and convincing them that their map of the situation and what actions will cause what consequences is flawed.
When we instead try to incentivize them, we do this by changing the consequences of the action, or altering other considerations (e.g. if someone cares about doing that which is endorsed by some authority, moral or otherwise, without regard to their knowledge or future actions, we could still use that as an incentive). We act upon the territory. We change circumstances such that taking action becomes more expensive, or has undesired consequences. This can include the consequence that we will take actions in response. Alternatively, we can improve the results of not taking action (e.g. bribe them, or promise something, or prevent the bad consequences of inaction by solving the underlying problem, etc etc).
This seems mostly like a clear distinction to me, except there's a tricky situation where you're acting upon their map in a way that makes it less rather than more accurate. For example, you might threaten something, but not intend to carry out the threat. Or you might lie about the situation's details. Did you act to persuade them that the action is not desirable, or did you change their incentives?
One possibility that I'm drawn to is to say there are three things here - persuasion, deception and incentivization. And of course many attempts combine two or three of these.
On the question of what the right action is:
Is the/a petition persuasion, deception or incentivization? In this case, it seems clearly to be persuasion. The mission is not to generate bad consequences and then inform the target of this threat. There are already bad consequences, both to the target and in general, that we are making more visible to the target. Whether or not we should also be doing incentivization in other ways seems like a distinct question. I've chosen yes to at least some extent.
Sometimes someone will want to do or not do something, and we'll think they're making a mistake from their perspective, and persuasion should be possible. Other times, we don't think they're making a mistake from their perspective, but still prefer that they change their behavior. We all incentivize people from time to time. It seems quite perverse to think we shouldn't do this in cases where the person is making a mistake! It seems even crazier to not do this when we see someone doing what we believe is the wrong thing because they see incentives (e.g. money or the ability to get clicks) that are compromising what would otherwise be good ethics.
I would strongly agree with your assertion that philosophical questions - and indeed many other questions - should not be settled via majority vote. But does that mean that when the vote is held one must abstain in protest? Do you also believe philosophers should not vote in elections? It's not like one can issue disclaimers there.
I also am confused by the assertion that the petition would benefit in some way from a "if after consideration you decide we're wrong, we'll support you" clause. It does not seem necessary or wise, before one attempts to persuade another that they are wrong, to agree to support their conclusion if they engage in careful consideration. Even when incentives are aligned.
Replies from: Vaniver
↑ comment by Vaniver · 2020-06-29T23:35:13.372Z · LW(p) · GW(p)
A quick comment on just this part:
I also am confused by the assertion that the petition would benefit in some way from a "if after consideration you decide we're wrong, we'll support you" clause. It does not seem necessary or wise, before one attempts to persuade another that they are wrong, to agree to support their conclusion if they engage in careful consideration. Even when incentives are aligned.
I think what this does is separate out the "I think you should update your map" message and the "I am shifting your incentives" message. If I think that someone would prefer the strawberry ice cream to the vanilla ice cream, I can simply offer that information to them, or I can advise them to get the strawberry, and make it a test of our friendship whether they follow my advice, rewarding them if they try it and like it, and punishing them in all other cases.
In cases where you simply want to offer information, and not put any 'undue' pressure on their decision-making, it seems good to be able to flag that. The broader question is something like "what pressures are undue, and why?"; you could imagine that there are cases where I want to shift the incentives of others, like telling a would-be bicycle thief that I think they would prefer not stealing the bike to stealing it, and part of that is because I would take actions against them if they did.
↑ comment by Raemon · 2020-06-29T15:51:23.132Z · LW(p) · GW(p)
Probably worth noting that folk on LessWrong may be using the word "rationality" differently than the way it sounds like you're using it. (This is fine, but it means we need to be careful that we're understanding each other right.)
The post What Do We Mean By Rationality is a bit old but still roughly captures what most LW-folk mean by the word:
1. Epistemic rationality: systematically improving the accuracy of your beliefs.
2. Instrumental rationality: systematically achieving your values.
The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.
This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.1
Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”
So rationality is about forming true beliefs and making decisions that help you win.
I'm not sure what your conception of rationality is. I'm somewhat interested, but I think it might be better to just cut closer to the issue: why is it good to rely on reasoned arguments rather than petitions?
Replies from: Agnes Callard
↑ comment by Agnes Callard · 2020-06-29T16:07:56.121Z · LW(p) · GW(p)
Yes, good point, thanks for the request for clarification.
I think there is a third kind of rationality, called "communicative rationality".
See this tweet: https://twitter.com/AgnesCallard/status/1276531044024451073?s=20
(and also my replies to questions therein)
I think there is such a thing as "communicating well," where "well" picks out internal norms of communication (not, e.g., in such a way as to conduce instrumentally to my interests or to my having truer beliefs--because it could happen that lying to you serves either of those ends), and that is what I mean by "communicating rationally."
The goals of such communication are what I called (in the tweet thread) "bidirectional likemindedness"--that we think the same thing, but not bc it's determined in advance that you will think what I (independently) thought or that I will think what you (independently) thought.
Replies from: Raemon
↑ comment by Raemon · 2020-06-29T16:25:34.931Z · LW(p) · GW(p)
So I do think it makes sense to have philosopher societies where the focus is on sharing information in such a way that we jointly converge on the truth (I'm not sure if this is quite the same thing you're getting at with communicative rationality.). And I think there is benefit to trying to get broader society to adopt more truthseeking styles of communication, which includes more reasoned arguments on the margin.
But, this doesn't imply that it's always the right thing to do, when interacting with people who don't share your truthseeking principles. (for extreme example, I wouldn't try to give reasoned arguments to someone attacking me on the street)
I have some sense of why communicative rationality is important to you, but not why it should be (overwhelmingly) important to me.
I think there is sometimes benefit to people standing by their principles, to get society to change around them. (i.e. you can be a hero of communicative rationality, maybe even trying to make reasoned arguments to an attacker on the street, to highlight that clear communication is a cause worth dying for). But, this is a supererogatory thing. I wouldn't want everyone who was interested in philosophy to feel like interest-in-philosophy meant giving up their ability to defend themselves, or give up the ability to communicate in ways that other cultures understand or respect.
That would probably result in fewer people being willing to incorporate philosophy into their life.
My own conception of rationality (note: Vaniver may or may not endorse this) is to be a robust agent – someone who reliably makes good decisions in a variety of circumstances, regardless of how other agents are interacting with me and how the environment might change. This includes clear communication, but also includes knowing how to defend yourself, and credibly communicating when you will defend yourself, and how, so that people can coordinate with you.
My conception of "rationalist hero" is someone who understands when it is the right time to defend "communication via reasoned arguments", and when it is the right time to defend other foundational norms (via incentives or whatnot).
I think this is legitimately tricky (part of being a rationalist hero in my book is having the good judgment to know the difference, and it can be hard sometimes). But, right now it seems to me that it's more important to be incentivizing the Times to not de-anonymize people, rather than to focus on persuading them that it is wrong to do so using reasoned arguments.
↑ comment by Slider · 2020-06-28T19:34:49.938Z · LW(p) · GW(p)
I can see a lot of good philosophy in "the fruit seller doesn't pose an argument, but an option for exchange," and in the trade value of the item not speaking to the item's general worth.
In the wrestling ring, it could be tempting, if the reigning champion had stark wrestling opinions, to assume he is correct; and he might have the attitude that anybody who disagrees with him should try to beat him. Recognizing that appeal to the stick is fallacious is beneficial in that you can realize why he could stay wrong if he is incorrect about where his wrestling prowess comes from.
The Greeks were known for annoying and bothering others by challenging their logic and concepts. Rational wrestling might not be the be-all end-all of wrestling, but rationality does wrestling good.
↑ comment by Agnes Callard · 2020-06-28T18:16:08.747Z · LW(p) · GW(p)
Also, if you want to read the NYT oped (sorry abt paywall), I've put the text here:
https://twitter.com/AgnesCallard/status/1277304501133873152?s=20
↑ comment by Zvi · 2020-06-29T23:33:11.517Z · LW(p) · GW(p)
This reply seems to be making two arguments:
1. That there is value in having philosophical 'heroes' who only make arguments on philosophical grounds and avoid anything that might look like arguing from authority or enabling of mobs.
2. That a danger is that NYT may lose the autonomy it needs to pursue truth.
I think I'm basically fine with #1, provided those arguments on philosophical grounds get made. Which seems in this case to have happened - you've clearly done more to help than you would have by only signing a petition.
I agree that #2 is a danger, but don't see how the petition makes the danger worse. NYT already has lots of incentives it responds to beyond 'seeking truth' no matter how charitable we want to be, and all the petition does is alert them to some of their incentives.
comment by orthonormal · 2020-06-28T17:21:14.546Z · LW(p) · GW(p)
I forget who responded with the kernel of this argument, but it wasn't mine:
Saying the incentives should be different doesn't mean pretending they are different. In an ideal world, news organizations would have good standards and would not give in to external pressure on those standards. In our world, news organizations have some bad standards and give in to external pressure from time to time.
Reaching a better world has to come from making de-escalation treaties or changing the overall incentives. Unilaterally disarming (by refusing to even sign petitions) has the completely predictable consequence that the NYT will compromise their standards in directions that we dislike, because the pressure would be high in each direction but ours.
comment by Purplehermann · 2020-06-28T17:22:34.853Z · LW(p) · GW(p)
"Allowing mobs influence..." If the NYT had decided to publish an article advocating killing blacks for talking in public, I doubt anyone would have an issue with an online mob pressuring the NYT to retract the article.
Certainly Callard would not be questioning whether the article itself or allowing mobs to influence the NYT to take it down was the greater threat.
Not all 'mobs' are created equal. Neither are all attempts at influence. The influence being exerted here is purely benign - this is not an attempt to influence the culture war or get someone fired; all that is asked is that someone be allowed to keep his pseudonymity, with good reason.
Edit: After reading the full text of Callard's OP I don't think what I wrote above addresses their full position.
As others have noted, this is not an instance of philosophers taking off the philosophy hat when dealing with other philosophers. The NYT isn't a group of philosophers; it is a business.
This business is acting in a harmful way, either because it is acting as a bureaucracy (reasoning will not make red tape go away), or in a hostile fashion (or a higher-up decided on this action just because, I suppose).
None of these possibilities lend themselves to looking at this as a simple mistake of ethics (unless you frame it as a mistake of normative ethics/bottom line ethics, in which case a petition is an actual argument), where you can discuss and reach a conclusion.
In regards to philosophy needing to come into play in real life too - philosophy needs to recognize that conflict exists in real life.
If a man is coming to kill someone you know, the proper response should be reached through mistake theory internally, but stopping the aggressor physically should not be out of bounds when deciding on a response. Mistake theory needs to be aware of conflict theory. (Of course, if the man is a mistake theorist with regard to the one who would stop him and would like to discuss before either takes action, one would be remiss not to.)
comment by ChristianKl · 2020-06-28T18:27:30.144Z · LW(p) · GW(p)
The NYTimes spread lies to the American people about WMDs that supported the mobilization towards war. Afterwards it faced public criticism, and as a result it reformed the way it publishes the news: it created the office of the public editor to increase its ethical standards.
Was it bad that public pressure changed their news-making in that regard? It seems to me obvious that it's good when public criticism leads a news outlet to increase its ethical standards.
Whenever an organization has the choice to engage in an action that brings it short-term economic benefits (clicks) at the expense of social value for broader society, it's important that public pressure can push the organization toward more ethical behavior.
From Agnes's Twitter:
I believe that the arguments in this op-ed--about why philosophers shouldn't use petitions to adjudicate their disputes--also apply to those non-philosophers who, for independent reasons, are committed to the power of public reason.
The kind of means you use to adjudicate disputes depends on both parties.
If I believed that the NYTimes were committed to the power of public reason, then I would grant that reasoned argument is a better vehicle than a petition. I do however believe that the NYTimes cares more about making money for its shareholders than it cares for public reason.
If anybody wants to make an argument that public reason is more important to the NYTimes than profits, I would be happy to see examples where the NYTimes decided to engage in actions that were neither good for clicks nor its reputation and that can be explained by caring for public reason.
Lastly, I think that philosophers as a class should exert more public pressure on institutions that engage in behavior that violates the knowledge that the philosophers gathered. Philosophers should do petitions about how the ontological assumptions of the DSM-5 are appalling.
comment by Viliam · 2020-06-28T19:08:29.161Z · LW(p) · GW(p)
And so the question becomes: when an editor at the New York Times makes a decision that seems wrong-headed and cruel, what interface do they present to the world, and how should we make use of it?
I think the interface involves pageviews and subscriptions.
With subscriptions, the right strategy would be to threaten to unsubscribe if NYT proceeds with the story. I heard that the process of unsubscription is quite complicated, so publishing a step-by-step manual would be a nice threat.
With pageviews, it is more complicated. The strategy "let's make the entire internet angry about doxing Scott" could easily backfire. NYT could simply publish a story without doxing Scott, which everyone would obviously carefully read... then another unexpected story about Scott, again without doxing him, again many readers... and again... and again... and when the stories would no longer get enough pageviews, then they would publish another story where they would dox Scott, so again tons of views... and afterwards some meta-stories like "why we believe it was ethically correct to dox Scott"... heck, even stories "reader's opinion: why it was wrong to dox Scott", the opinion doesn't matter, there are pageviews either way... etc. This is why online advertising is such a force of evil. It is not obvious to me whether losses from subscriptions would outweigh the gains from views.
Replies from: DanielFilan
↑ comment by DanielFilan · 2020-06-30T01:17:13.268Z · LW(p) · GW(p)
To be fair, they also have a feedback page where you can type stuff.
To me, this seems like a strong counterargument - I'd think that petitions are an interface to the NYT the same way that they are to me, that is to say an unwelcome one.
Replies from: Vaniver
↑ comment by Vaniver · 2020-06-30T23:45:56.423Z · LW(p) · GW(p)
Also relevant: The Asshole Filter.
comment by Rafael Harth (sil-ver) · 2020-06-28T10:33:29.545Z · LW(p) · GW(p)
To me, both the original tweet and your reply seem to miss the point entirely. I didn't sign this petition out of some philosophical position on what petitions should or shouldn't be used for. I did it because I see something very harmful happening and think this is a way to prevent it.
Of course, anyone is free to look at this and try to judge it by abstracting away details and looking at the underlying principle. Since the tweet does that, it's fine to make a counter-argument by doing the same. But it doesn't mean anything to me, and I doubt that most people who signed the petition can honestly say that it has much to do with why they signed it.
↑ comment by Vaniver · 2020-06-28T19:09:59.179Z · LW(p) · GW(p)
To me, both the original tweet and your reply seem to miss the point entirely. I didn't sign this petition out of some philosophical position on what petitions should or shouldn't be used for. I did it because I see something very harmful happening and think this is a way to prevent it.
I think it is very important [LW · GW] to have things that you will not do, even if they are effective at achieving your immediate goals. That is, I think you do have a philosophical position here, it's just a shallow one.
I disagree with the position Callard has staked out that petitions are inconsistent with being a philosophical hero, but for reasons that presumably we could converge on; hence the reply, and a continuing conversation in the comments.
↑ comment by Rafael Harth (sil-ver) · 2020-06-29T11:32:19.213Z · LW(p) · GW(p)
I think it is very important [LW · GW] to have things that you will not do, even if they are effective at achieving your immediate goals. That is, I think you do have a philosophical position here, it's just a shallow one.
I think the crux may be that I don't agree with the claim that you ought to have rules separate from an expected utility calculation. (I'm familiar with this position from Eliezer, but it's never made sense to me.) For the "should-we-lie-about-the-singularity" example, I think that adding a justified amount of uncertainty into the utility calculation would have been enough to preclude lying; it doesn't need to be an external rule. My philosophical position is thus just boilerplate utilitarianism, and I would disagree with your first sentence if you took out the "immediate."
In this case, it just seems fairly obvious to me that signing this petition won't have unforeseen long term consequences that outweigh the direct benefit.
And, as I said, I think responding to Callard in the way you did is useful, even if I disagree with the framework.
↑ comment by Kenny · 2020-06-28T19:00:03.167Z · LW(p) · GW(p)
You signed the petition purely out of instrumental concerns, and any principles about petitions and how news organizations should or should not respond to them are entirely independent? Admitting that – even judged just instrumentally – seems counter-productive.
The relevant principle seems pretty clear (to me): of course people should be generally open to being swayed by (reasoned) argumentation, e.g. via petition – unless there's some concern(s) that override it, like a principled pre-commitment to ignore some types of influence (for very good reasons).
↑ comment by Rafael Harth (sil-ver) · 2020-06-29T12:06:28.773Z · LW(p) · GW(p)
You signed the petition purely out of instrumental concerns, and any principles about petitions and how news organizations should or should not respond to them are entirely independent? Admitting that – even judged just instrumentally – seems counter-productive.
Yes. My mind didn't go there when I decided to sign, and, on reflection, I don't think it should have gone there. I'm not sure if "instrumental" is the right word, but I think we mean the same thing.
I don't think it is counter-productive. I think it's important to realize that there is nothing wrong with supporting X even if the generalized version of supporting X is something you oppose. Do you disagree with that?
↑ comment by Kenny · 2020-07-03T19:22:35.972Z · LW(p) · GW(p)
I agree that there might not be anything wrong with supporting a specific X without also supporting (or while opposing) all X in general. But that all depends on the reasons why you support the specific X but don't support (or oppose) the general X. Why did you sign the petition but not support the general policy? (Also, what do you think the general policy is, exactly?)
I don't personally have strong feelings or convictions pertaining to all of this. I don't want the NYT to publish Scott's full legal name, but I don't have any particular strong objections to them or anyone else doing that in general. I do oppose the specific politics that I think motivates them to publish his name. I also don't think there are any good reasons to publish his name that aren't motivated by a desire to hurt or harm him.
↑ comment by Rafael Harth (sil-ver) · 2020-07-03T19:40:55.139Z · LW(p) · GW(p)
I agree that there might not be anything wrong with supporting a specific X without also supporting (or with opposing) all X in general. But that all depends on the reasons why you support the specific X but don't support (or oppose) the general X.
Well, in that case, I don't think there's much left to hash out here. My main point would have been that I think it's a bad idea to tie your decision to a generalizable principle.
comment by Vaniver · 2020-06-28T03:28:26.043Z · LW(p) · GW(p)
Changing sites with a reply is always a fraught business; my defense is that I don't have a Twitter account, and Twitter is horrible for longform discussion. If exactly one person wants to post a link to this post in that Twitter thread, I'd appreciate it.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-06-28T10:11:28.250Z · LW(p) · GW(p)
Yeah, IDK. There's a slippery slope from "This is the interface this institution has with the world, so of course we should use it" to "Our enemies use tactic X, so there's nothing wrong with us using tactic X too," which then becomes "Our enemies used tactic X once, so now we are justified in using it a lot." We need to find a Schelling fence or avoid the slope entirely.
Here is a brainstorm of suggestions:
--Our petition should have a clause talking about how terrible it is for the NYT to bow to mobs of enraged internet elites but that it would be hypocritical of them to choose now as their moment to grow a spine. At least this gets the right ideas across.
--We also take some action to encourage them to grow a spine, so that they become more resistant to this tactic in general in the future. That way, we are using tactic X now while making X less viable for everyone in the future.
--We don't do our petition at all, since that's an example of the tactic we dislike, but instead we do some tactic we like, such as challenging the NYT to a third-party moderated public debate on the matter, or simply raising tons of awareness about what's happening, with the goal of convincing third parties of the rightness of our cause rather than the goal of directly influencing the NYT.
--We take some steps to make our petition a non-mob. Like, maybe we require that everyone who signs it restate it in their own words or something, or that everyone who signs it be someone initially skeptical who changed their mind as a result of hearing both sides.
On the question of whether we should have one:
↑ comment by Vaniver · 2020-06-28T19:39:50.677Z · LW(p) · GW(p)
Our petition should have a clause talking about how terrible it is for the NYT to bow to mobs of enraged internet elites but that it would be hypocritical of them to choose now as their moment to grow a spine. At least this gets the right ideas across.
Another way to look at this is that it's offered information; our culture has some rules, and their culture has some rules, and they're proposing a massive rule violation in our culture, and in the interest of mutual understanding we're telling them that we would view it as hostile.
Now, you might say "this is a symmetric weapon!"; the people who claimed that Bennet's decision to print Tom Cotton's op-ed was a massive rule violation in their culture are doing basically the same thing. I reply that we have to represent our culture [LW · GW] if we want it to be present; competing views are more reason to defend the core principles of our society, not less.
[Of course, I am not arguing for doing anything against your conscience, except insofar as I think your conscience is mistaken about what should be unethical.]
We take some steps to make our petition a non-mob. Like, maybe we require that everyone who signs it restate it in their own words or something, or that everyone who signs it be someone initially skeptical who changed their mind as a result of hearing both sides.
Petitions allow for intellectual specialization of labor; specialists create a position, and then others choose whether or not to sign on. This allows for compression and easy communication; forcing everyone to restate it taxes participation and makes the result harder to comprehend. (Suppose many of the comments actually include disagreement with planks of the petition; how then should it be interpreted?)
Similarly, restricting it to people who are "initially skeptical" is selection on beliefs, not methodology, and is adverse selection (as people who initially picked the right answer are now barred).
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-06-28T23:02:53.454Z · LW(p) · GW(p)
I signed the petition myself. I think our disagreement is smaller than it seems. I think partly my concern is that this is a symmetric weapon, but partly it's simply what I said: There is a slippery slope; we would do well to think about fences. Does our culture have a clearly defined fence on this slope already? If so, I'm not aware of it.
↑ comment by Raemon · 2020-06-29T16:30:24.645Z · LW(p) · GW(p)
Our petition should have a clause talking about how terrible it is for the NYT to bow to mobs of enraged internet elites but that it would be hypocritical of them to choose now as their moment to grow a spine. At least this gets the right ideas across.
Something in this space feels approximately right to me. (This feels supererogatory rather than obligatory, and I think it is more important to be able to defend yourself than to get all the nuances exactly right. But, it is good to look for ways to defend yourself that also improve civilizational norms on the margin)
comment by Dagon · 2020-06-29T17:07:02.827Z · LW(p) · GW(p)
I think I have different expectations when engaging in reasoned discourse than when publishing to an un-responsive semi-entity (the NYT is not a reasoning agent, it's an institution comprising both individual and shared history and decision-making). I also think that BOTH the object-level and the overall principles are important, and communication should show how they align.
I would strongly prefer that the NYT _not_ publish unnecessary and harmful identifying information about people who don't want it. That applies to everyone - list them by their public, common moniker, not necessarily their legal name. I would separately like Scott to feel safe in continuing to publish his excellent works. These are in complete alignment.
The tactics for focusing some organizational attention on the issue are varied, but I agree they're ambiguous between a petulant demand for this specific object result, and an altruistic punishment to bring attention to a harmful (IMO) overall policy. I hope the NYT is a sane enough organization to take the useful parts and ignore the harmful ones. And I don't think our group is big or powerful enough that we're going to force the NYT into any unacceptable (to them) solution.
tl;dr: the document doesn't need to say explicitly that the NYT should make its own reasonable decisions, as that's implicit and required by the position the NYT holds. That's the very nature of groups protesting against large organizations - the organization gets to, and has to, decide how and whether to change its behavior, and 'FU' is a valid and not-uncommon response.
comment by kithpendragon · 2020-06-28T15:49:03.955Z · LW(p) · GW(p)
I was able to read the op-ed in a private window (I don't have a subscription to the Times anyway). The article is written in the context of a petition "opposing the deplatforming of philosophers on the basis of their views on sex and gender." Callard chose not to sign. She argues herself back and forth a few times about why before settling on the opinion that philosophers should not engage in political behavior (such as petitioning) to convince each other about the ethics of their profession because doing so is unprofessional in the context of academic philosophy, a field that she asserts must remain dedicated to "belief acquisition [that is committed to being] intellectually honest, conducive to knowledge, nonaggressive, inquisitive, respectful."
comment by kithpendragon · 2020-06-28T11:43:45.569Z · LW(p) · GW(p)
News outlets would probably tell you their interface is facts. Seems to me that "facts" and "carefully reasoned argument" should be compatible modes.
When using the public interface explicitly presented by an institution fails to produce any apparent effect, it seems reasonable to try another. NYT also has an obvious corporate interface (purchase and sale of articles) and, as a publisher, a related political interface (exchange of news for views provides a vote-like structure on the content of the stories that the corporation would probably respond to in the absence of other factors).
Unfortunately, this corporate/political interface is a slow and post-hoc way of communicating with a large organization. The traditional actions when seeking a rapid response from a corporate or political entity are collective bargaining and petition, respectively. Formally unionizing doesn't seem immediately useful in this situation.