Posts

Metaphorical extensions and conceptual figure-ground inversions 2019-07-24T06:21:54.487Z · score: 34 (9 votes)
Dialogue on Appeals to Consequences 2019-07-18T02:34:52.497Z · score: 27 (19 votes)
Why artificial optimism? 2019-07-15T21:41:24.223Z · score: 49 (16 votes)
The AI Timelines Scam 2019-07-11T02:52:58.917Z · score: 47 (76 votes)
Self-consciousness wants to make everything about itself 2019-07-03T01:44:41.204Z · score: 43 (30 votes)
Writing children's picture books 2019-06-25T21:43:45.578Z · score: 110 (35 votes)
Conditional revealed preference 2019-04-16T19:16:55.396Z · score: 18 (7 votes)
Boundaries enable positive material-informational feedback loops 2018-12-22T02:46:48.938Z · score: 30 (12 votes)
Act of Charity 2018-11-17T05:19:20.786Z · score: 153 (59 votes)
EDT solves 5 and 10 with conditional oracles 2018-09-30T07:57:35.136Z · score: 61 (18 votes)
Reducing collective rationality to individual optimization in common-payoff games using MCMC 2018-08-20T00:51:29.499Z · score: 58 (18 votes)
Buridan's ass in coordination games 2018-07-16T02:51:30.561Z · score: 55 (19 votes)
Decision theory and zero-sum game theory, NP and PSPACE 2018-05-24T08:03:18.721Z · score: 109 (36 votes)
In the presence of disinformation, collective epistemology requires local modeling 2017-12-15T09:54:09.543Z · score: 116 (42 votes)
Autopoietic systems and difficulty of AGI alignment 2017-08-20T01:05:10.000Z · score: 3 (3 votes)
Current thoughts on Paul Christiano's research agenda 2017-07-16T21:08:47.000Z · score: 17 (8 votes)
Why I am not currently working on the AAMLS agenda 2017-06-01T17:57:24.000Z · score: 16 (9 votes)
A correlated analogue of reflective oracles 2017-05-07T07:00:38.000Z · score: 4 (4 votes)
Finding reflective oracle distributions using a Kakutani map 2017-05-02T02:12:06.000Z · score: 1 (1 votes)
Some problems with making induction benign, and approaches to them 2017-03-27T06:49:54.000Z · score: 3 (3 votes)
Maximally efficient agents will probably have an anti-daemon immune system 2017-02-23T00:40:47.000Z · score: 3 (3 votes)
Are daemons a problem for ideal agents? 2017-02-11T08:29:26.000Z · score: 5 (2 votes)
How likely is a random AGI to be honest? 2017-02-11T03:32:22.000Z · score: 0 (0 votes)
My current take on the Paul-MIRI disagreement on alignability of messy AI 2017-01-29T20:52:12.000Z · score: 17 (9 votes)
On motivations for MIRI's highly reliable agent design research 2017-01-29T19:34:37.000Z · score: 10 (9 votes)
Strategies for coalitions in unit-sum games 2017-01-23T04:20:31.000Z · score: 3 (3 votes)
An impossibility result for doing without good priors 2017-01-20T05:44:26.000Z · score: 1 (1 votes)
Pursuing convergent instrumental subgoals on the user's behalf doesn't always require good priors 2016-12-30T02:36:48.000Z · score: 7 (5 votes)
Predicting HCH using expert advice 2016-11-28T03:38:05.000Z · score: 3 (3 votes)
ALBA requires incremental design of good long-term memory systems 2016-11-28T02:10:53.000Z · score: 1 (1 votes)
Modeling the capabilities of advanced AI systems as episodic reinforcement learning 2016-08-19T02:52:13.000Z · score: 4 (2 votes)
Generative adversarial models, informed by arguments 2016-06-27T19:28:27.000Z · score: 0 (0 votes)
In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy 2016-06-11T04:05:47.000Z · score: 12 (4 votes)
Two problems with causal-counterfactual utility indifference 2016-05-26T06:21:07.000Z · score: 3 (3 votes)
Anything you can do with n AIs, you can do with two (with directly opposed objectives) 2016-05-04T23:14:31.000Z · score: 2 (2 votes)
Lagrangian duality for constraints on expectations 2016-05-04T04:37:28.000Z · score: 1 (1 votes)
Rényi divergence as a secondary objective 2016-04-06T02:08:16.000Z · score: 2 (2 votes)
Maximizing a quantity while ignoring effect through some channel 2016-04-02T01:20:57.000Z · score: 2 (2 votes)
Informed oversight through an entropy-maximization objective 2016-03-05T04:26:54.000Z · score: 0 (0 votes)
What does it mean for correct operation to rely on transfer learning? 2016-03-05T03:24:27.000Z · score: 4 (4 votes)
Notes from a conversation on act-based and goal-directed systems 2016-02-19T00:42:29.000Z · score: 6 (4 votes)
A scheme for safely handling a mixture of good and bad predictors 2016-02-17T05:35:55.000Z · score: 0 (0 votes)
A possible training procedure for human-imitators 2016-02-16T22:43:52.000Z · score: 2 (2 votes)
Another view of quantilizers: avoiding Goodhart's Law 2016-01-09T04:02:26.000Z · score: 3 (3 votes)
A sketch of a value-learning sovereign 2015-12-20T21:32:45.000Z · score: 11 (2 votes)
Three preference frameworks for goal-directed agents 2015-12-02T00:06:15.000Z · score: 4 (2 votes)
What do we need value learning for? 2015-11-29T01:41:59.000Z · score: 3 (3 votes)
A first look at the hard problem of corrigibility 2015-10-15T20:16:46.000Z · score: 10 (3 votes)
Conservative classifiers 2015-10-02T03:56:46.000Z · score: 2 (2 votes)
Quantilizers maximize expected utility subject to a conservative cost constraint 2015-09-28T02:17:38.000Z · score: 3 (3 votes)

Comments

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-20T03:20:25.600Z · score: 2 (1 votes) · LW · GW

That does seem right, actually.

Now that I think about it, due to this cognitive architecture issue, she actually does gain new information. If she sees a red apple in the future, she can know that it's red (because it produces the same qualia as the first red apple), whereas she might be confused about the color if she hadn't seen the first apple.

I think I got confused because, while she does learn something upon seeing the first red apple, it isn't the naive "red wavelengths are red-quale", it's more like "the neurons that detect red wavelengths got wired and associated with the abstract concept of red wavelengths." Which is still, effectively, new information to Mary-the-cognitive-system, given limitations in human mental architecture.

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-20T03:00:20.619Z · score: 8 (3 votes) · LW · GW

There is a qualitative redness to red. I get that intuition.

I think "Mary's room is uninteresting" is wrong; it's uninteresting in the case of robot scientists, but interesting in the case of humans, in part because of what it reveals about human cognitive architecture.

I think in the human case, I would see Mary seeing a red apple as gaining in expressive vocabulary rather than information. She can then describe future things as "like what I saw when I saw that first red apple". But, in the case of first seeing the apple, the redness quale is essentially an arbitrary gensym.

I suppose I might end up agreeing with the illusionist view on some aspects of color perception, then, in that I predict color quales might feel like new information when they actually aren't. Thanks for explaining.

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-20T01:56:55.089Z · score: 2 (1 votes) · LW · GW

Mary's room seems uninteresting, in that robot-Mary can predict pretty well what bit-pattern she's going to get upon seeing color. (To the extent that the human case is different, it's because of cognitive architecture constraints)

Regarding the zombie argument: The robots have uncertainty over the bridge laws. Under this uncertainty, they may believe it is possible that humans don't have experiences, due to the bridge laws only identifying silicon brains as conscious. Then humans would be zombies. (They may have other theories saying this is pretty unlikely / logically incoherent / etc)

Basically, the robots have a primitive entity "my observations" that they explain using their theories. They have to reconcile this with the eventual conclusion they reach that their observations are those of a physically instantiated mind like other minds, and they have degrees of freedom in which things they consider "observations" of the same type as "my observations" (things that could have been observed).

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-20T01:30:09.197Z · score: 4 (2 votes) · LW · GW

It seems that doubting that we have observations would cause us to doubt physics, wouldn't it? Since physics-the-discipline is about making, recording, communicating, and explaining observations.

Why think we're in a physical world if the observations that seem to suggest we are in one are themselves illusory?

This is kind of like if the people saying we live in a material world arrived at these theories through their heaven-revelations, and can only explain the epistemic justification for belief in a material world by positing heaven. Seems odd to think heaven doesn't exist in this circumstance.

(Note, personally I lean towards supervenient neutral monism: direct observation and physical theorizing are different modalities for interacting with the same substance, and mental properties supervene on physical ones in a currently-unknown way. Physics doesn't rule out observation, in fact it depends on it, while itself being a limited modality, such that it is unsurprising if you couldn't get all modalities through the physical-theorizing modality. This view seems non-contradictory, though incomplete.)

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-20T01:22:04.335Z · score: 2 (1 votes) · LW · GW

Robots take in observations. They make theories that explain their observations. Different robots will make different observations and communicate them to each other. Thus, they will talk about observations.

After making enough observations they make theories of physics. (They had to talk about observations before they made low-level physics theories, though; after all, they came to theorize about physics through their observations). They also make bridge laws explaining how their observations are related to physics. But, they have uncertainty about these bridge laws for a significant time period.

The robots theorize that humans are similar to them, based on the fact that they have functionally similar cognitive architecture; thus, they theorize that humans have observations as well. (The bridge laws they posit are symmetric that way, rather than being silicon-chauvinist)

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-19T22:43:32.790Z · score: 5 (3 votes) · LW · GW

Thanks for the elaboration. It seems to me that experiences are:

  1. Hard-to-eff, as a good-enough theory of what physical structures have which experiences has not yet been discovered, and would take philosophical work to discover.

  2. Hard to reduce to physics, for the same reason.

  3. In practice private due to mind-reading technology not having been developed, and due to bandwidth and memory limitations in human communication. (It's also hard to imagine what sort of technology would allow replicating the experience of being a mouse)

  4. Pretty directly apprehensible (what else would be? If nothing is, what do we build theories out of?)

It seems natural to conclude from this that:

  1. Physical things exist.
  2. Experiences exist.
  3. Experiences probably supervene on physical things, but the supervenience relation is not yet determined, and determining it requires philosophical work.
  4. Given that we don't know the supervenience relation yet, we need to at least provisionally have experiences in our ontology distinct from physical entities. (It is, after all, impossible to do physics without making observations and reporting them to others)

Is there something I'm missing here?

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-19T22:04:23.800Z · score: 4 (2 votes) · LW · GW

What's the difference between making claims about nearby objects and making claims about qualia (if there is one)? If I say there's a book to my left, is that saying something about qualia? If I say I dreamt about a rabbit last night, is that saying something about qualia?

(Are claims of the form "there is a book to my left" radically incorrect?)

That is, is there a way to distinguish claims about qualia from claims about local stuff/phenomena/etc?

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-19T21:30:16.845Z · score: 3 (2 votes) · LW · GW

What are you experiencing right now? (E.g. what do you see in front of you? In what sense does it seem to be there?)

Comment by jessica-liu-taylor on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T02:37:00.765Z · score: 30 (8 votes) · LW · GW

Good epistemics says: If X, I desire to believe X. If not-X, I desire to believe not-X.

This holds even when X is "Y person did Z thing" and Z is norm-violating.

If you don't try to explicitly believe "Y person did Z thing" in worlds where in fact Y person did Z thing, you aren't trying to have good epistemics. If you don't say so where it's relevant (and give a bogus explanation instead), you're demonstrating bad epistemics. (This includes cases of saying a mistake theory where a conflict theory is correct)

It's important to distinguish good epistemics (having beliefs correlated with reality) from the aesthetic that claims credit for good epistemics (e.g. the polite academic style).

Don't conflate politeness with epistemology. They're actually opposed in many cases!

Comment by jessica-liu-taylor on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T02:31:59.639Z · score: 10 (2 votes) · LW · GW

Does the AI survey paper say experts are biased in any direction? (I didn't see it anywhere)

Is there an accusation of violation of existing norms (by a specific person/organization) you see "The AI Timelines Scam" as making? If so, which one(s)?

Comment by jessica-liu-taylor on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-15T01:48:28.990Z · score: 23 (7 votes) · LW · GW

Suppose Carol is judging a debate between Alice and Bob. Alice says "X, because Y". Bob acknowledges the point, but argues "actually, a stronger reason for believing not-X is Z". Alice acts like she doesn't understand the point. Bob tries explaining in other words, without success.

Carol, following your advice, says: "Alice made a clear point in favor of X. Bob failed to make a clear point against X." Therefore, she judges the debate outcome to be in favor of X.

However, this is Carol abdicating her responsibility to use her own judgment of how clear Bob's point was. Maybe it is really clear to Carol, and to a hypothetical "reasonable person" (significantly less smart than Carol), that Z is a good reason to believe not-X. Perhaps Z is actually a very simple logical argument. And so, the debate outcome is misleading.

The thing is that in any judgment of clarity, one of the people involved is the person making that judgment; and, they are obligated to use their own reasoning, not only to see whether the point was understood by others. You can't define clarity by whether someone else understood the point, you have to judge it for yourself as well. (Of course, after making your own judgment about how clear the point was, you can define the statement's clarity as whether you judged it to be clear, but this is tautological)

Comment by jessica-liu-taylor on Matthew Barnett's Shortform · 2019-08-12T22:07:59.867Z · score: 10 (6 votes) · LW · GW

I would name the following:

Comment by jessica-liu-taylor on Power Buys You Distance From The Crime · 2019-08-12T18:15:37.144Z · score: 19 (7 votes) · LW · GW

Would you correct your response so?

Perhaps (75% chance?), in part because I've spent >100 hours talking about, reading about, and thinking about good conflict theories. I would have been very likely misled 3 years ago. I was only able to get to this point because enough people around me were willing to break conflict theory taboos.

It is not the case that everybody knows. To get from a state where not everybody knows to a state where everybody knows, it must be possible to talk openly about such things. (I expect the average person on this website to make the correction with <50% probability, even with the alternative framing "Does mistake theory explain this case well?")

It actually does have to be a lot of discussion. Over-attachment to mistake theory (even when a moderate amount of contrary evidence is presented) is a systematic bias I've observed, and it can be explained by factors such as: conformity, social desirability bias (incl. fear), conflict-aversion, desire for a coherent theory that you can talk about with others, getting theories directly from others' statements, being bad at lying (and at detecting lying), etc. (This is similar to (and may even be considered a special case of) the question of why people are misled by propaganda, even when there is some evidence that the propaganda is propaganda; see Gell-Mann amnesia)

Comment by jessica-liu-taylor on Power Buys You Distance From The Crime · 2019-08-12T06:32:28.259Z · score: 25 (7 votes) · LW · GW

Consider a situation where:

  • People are discussing phenomenon X.
  • In fact, a conflict theory is a good explanation for phenomenon X.
  • However, people only state mistake theories for X, because conflict theories are taboo.

Is your prediction that the participants in the conversation, readers, etc, are not misled by this? Would you predict that, if you gave them a survey afterwards asking for how they would explain X, they in fact give a conflict theory rather than a mistake theory, since they corrected for the distortion due to the conflict theory taboo?

Comment by jessica-liu-taylor on Power Buys You Distance From The Crime · 2019-08-12T06:27:13.135Z · score: 35 (12 votes) · LW · GW

I'm going to try explaining my view and how it differs from the "politics is the mind killer" slogan.

  • People who are good at talking about conflict, like Robin Hanson, can do it in a way that improves the ability for people to further talk rationally about conflict. Such discussions are not only not costly, they're the opposite of costly.
  • Some people (most people?) are bad at talking about conflict. They're likely to contribute disinformation to these discussions. The discussions may or may not be worth having, but, it's not surprising if high-disinformation conversations end up quite costly.
  • My view: people who are actually trying can talk rationally enough about conflict for it to be generally positive. The issue is not a question of ability so much as a question of intent-alignment. (Though, getting intent aligned could be thought of as a kind of skill). (So, I do think political discussions generally go well when people try hard to only say true things!)
  • Why would I believe this? The harms from talking about conflict aren't due to people making simple mistakes, the kind that are easily corrected by giving them more information (which could be uncovered in the course of discussions of conflict). Rather, they're due to people enacting conflict in the course of discussing conflict, rather than using denotative speech.
  • Yes, I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does. (Are errors random, or do they favor fighting on a given side / appeasing local power structures / etc?)
  • Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then "politics is the mind-killer" is the wrong framing. Rather, "politics is a domain where people often try to kill each other's minds" is closer.
  • In such a circumstance, building models of which optimization pressures are harming discourse in which ways is highly useful, and actually critical for social modeling. (As I said in my previous comment, it's strictly positive for an epistemic community to have better information about the degree of trustworthiness of different information systems)
  • If you see people making conflict theory models, and those models seem correct to you (or at least, you don't have any epistemic criticism of them), then shutting down the discussions (on the basis that they're conflict-theorist) is actively doing harm to this model-building process. You're keeping everyone confused about where the adversarial optimization pressures are. That's like preventing people from turning on the lights in a room that contains monsters.
  • Therefore, I object to talking about conflict theory models as "inherently costly to talk about" rather than "things some (not all!) people would rather not be talked about for various reasons". They're not inherently costly. They're costly because some optimization pressures are making them costly. Modeling and opposing (or otherwise dealing with) these is the way out. Insisting on epistemic discourse even when such discourse is about conflict is a key way of doing so.
Comment by jessica-liu-taylor on Power Buys You Distance From The Crime · 2019-08-11T05:43:33.085Z · score: 19 (6 votes) · LW · GW

My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.

"Talking about conflict is a limited resource" seems very, very off to me.

There are two relevant resources in a community. One is actual trustworthiness: how often do people inform each other (rather than deceive each other), help each other (rather than cheat each other), etc. The other is correct beliefs about trustworthiness: are people well-calibrated and accurate about how trustworthy others (both in particular and in general) are. These are both resources. It's strictly better to have more of each of them.

If Bob deceives me,

I desire to believe that Bob deceives me;

If Bob does not deceive me,

I desire to believe that Bob does not deceive me;

Let me not become attached to beliefs I may not want.

Talking about conflict in ways that are wrong is damaging a resource (it's causing people to have incorrect beliefs). Using clickbaity conflict-y titles without corresponding evidence is spending a resource (attention). Talking about conflict informatively/accurately is not spending a resource, it's producing a resource.

EDIT: also note, informative discussion of conflict, such as in Robin Hanson's work, makes it easier to talk informatively about conflict in the future, as it builds up theoretical framework and familiarity. Which means "talking about conflict is a limited resource" is backwards.

Comment by jessica-liu-taylor on Power Buys You Distance From The Crime · 2019-08-04T18:39:04.709Z · score: 16 (5 votes) · LW · GW

Quoting Scott's post:

Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. Some of us have good ideas, others have bad ideas that wouldn’t help, or that would cause too many side effects.

Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.

Part of what seems strange about drawing the line at denotative vs. enactive speech is that there are conflict theorists who can speak coherently/articulately in a denotative fashion (about conflict), e.g.:

It seems both coherent and consistent with conflict theory to believe "some speech is denotative and some speech is enacting conflict."

(I do see a sense in which mechanism design is a mistake theory, in that it assumes that deliberation over the mechanism is possible and desirable; however, once the mechanism is in place, it assumes agents never make mistakes, and differences in action are due to differences in values)

Comment by jessica-liu-taylor on Power Buys You Distance From The Crime · 2019-08-04T02:50:24.447Z · score: 16 (7 votes) · LW · GW

Like, from an economic viewpoint, there’s no reason why “the main driver of disagreement is self-interest” would lead to arguing that public choice theory is racist, which was one of Scott’s original examples.

I don't share this intuition. The Baffler article argues:

IN DECEMBER 1992, AN OBSCURE ACADEMIC JOURNAL published an article by economists Alexander Tabarrok and Tyler Cowen, titled “The Public Choice Theory of John C. Calhoun.” Tabarrok and Cowen, who teach in the notoriously libertarian economics department at George Mason University, argued that the fire-breathing South Carolinian defender of slaveholders’ rights had anticipated “public choice theory,” the sine qua non of modern libertarian political thought.

...

Astutely picking up on the implications of Buchanan’s doctrine, Tabarrok and Cowen enumerated the affinities public choice shared with Calhoun’s fiercely anti-democratic political thought. Calhoun, like Buchanan a century and a half later, had theorized that majority rule tended to repress a select few. Both Buchanan and Calhoun put forward ideas meant to protect an aggrieved if privileged minority. And just as Calhoun argued that laws should only be approved by a “concurrent majority,” which would grant veto power to a region such as the South, Buchanan posited that laws should only be made by unanimous consent. As Tabarrok and Cowen put it, these two theories had “the same purpose and effect”: they oblige people with different interests to unite—and should these interested parties fail to achieve unanimity, government is paralyzed.

In marking Calhoun’s political philosophy as the crucial antecedent of public choice theory, Tabarrok and Cowen unwittingly confirmed what critics have long maintained: libertarianism is a political philosophy shot through with white supremacy. Public choice theory, a technical language nominally about human behavior and incentives, helps ensure that blacks remain shackled.

...

In her 2017 book, Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America, historian Nancy MacLean argues that Buchanan developed his ideas in service of a Virginia elite hell-bent on preserving Jim Crow.

The overall argument is something like:

  • Calhoun and Buchanan both had racist agendas (maintaining slavery and segregation). (They may have these agendas due to some combination of personal self-interest and class self-interest)
  • They promoted ideas about democratic governance (e.g. that majority rule is insufficient) that were largely motivated by these agendas.
  • These ideas are largely the same as the ones of public choice theory (as pointed out by Cowen and Tabarrok)
  • Therefore, it is likely that public choice theory is advancing a racist agenda, and continues being advocated partially for this reason.

Overall, this is an argument that personal self-interest, or class self-interest, are driving the promotion of public choice theory. (Such interests and their implications could be studied within economics; though, economics typically avoids discussing group interests except in the context of discrete organizational units such as firms)

Another way of looking at this is:

  • Economics, mechanism design, public choice theory, etc are meta-level theories about how to handle conflicts of interest.
  • It would be desirable to have agreement on good meta-level principles in order to resolve object-level conflicts.
  • However, the choice of meta-level principles (and, the mapping between those principles and reality) is often itself political or politicized.
  • Therefore, there will be conflicts over these meta-level principles.
Comment by jessica-liu-taylor on Power Buys You Distance From The Crime · 2019-08-03T17:50:42.802Z · score: 13 (8 votes) · LW · GW

The whole question of the essay is basically “who should we be angry at”?

While the post has a few sentences about moral blame, the main thesis is that power allows people to avoid committing direct crime while having less-powerful people commit those crimes instead (and hiding this from the powerful people). This is a denotative statement that can be evaluated independent of "who should we be angry at".

Such denotative statements are very useful when considering different mechanisms for resolving principal-agent problems. Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and is determining what consequences should happen to different agents, e.g. in some cases "who we should be angry at" if that's the best available implementation.

Comment by jessica-liu-taylor on Drive-By Low-Effort Criticism · 2019-08-01T01:14:39.994Z · score: 11 (13 votes) · LW · GW

What is the evaluation of "effort" even doing here? Why not just evaluate whether the criticism is high-quality, understands the post, is correct, etc?

Requiring "effort" (independent of quality) is a proof-of-work scheme meant to tax criticism.

Comment by jessica-liu-taylor on What woo to read? · 2019-07-29T19:53:18.941Z · score: 23 (9 votes) · LW · GW

Here's some of the mysticism ("woo" or not) that I've found helpful:

Taoism:

Buddhism:

Tantra:

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-25T17:39:38.202Z · score: 4 (2 votes) · LW · GW

I shouldn't have to argue about the object-level political consequences of 1+4=5 in a post arguing exactly that. This is the analytic synthetic distinction / logical uncertainty / etc.

Yes, I could have picked a better, less political example, as recommended in Politics is the Mind-Killer. In retrospect, that would have caused less confusion.

Anyway, Evan has the option of commenting on my AI timelines post, open thread, top level post, shortform, etc.

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-25T09:38:29.623Z · score: 3 (3 votes) · LW · GW

This is a fictional dialogue demonstrating a meta-level point about how discourse works, and your comment is pretty off-topic. If you want to comment on my AI timelines post, do that (although you haven't read it so I don't even know which of my content you're trying to comment on).

Comment by jessica-liu-taylor on Metaphorical extensions and conceptual figure-ground inversions · 2019-07-25T00:05:18.297Z · score: 6 (3 votes) · LW · GW

The concept cloning you talk about is definitely a thing. It's a good fit for the river/glacier example.

I think there is a general form of the "river of X" concept, such that people know what you mean if you say "river of (new substance)" (and can reuse their intuitions about other rivers(2)), but this form isn't named.

There are cases where "river" is used metaphorically without modifying it to "river of X". Consider: "The glacier is a river flowing slowly down the mountain". It's poetic/metaphorical language, not considered to be literally true, but considered to be true in some metaphorical sense.

The water/ice example is a case where ice is considered to be, literally, a kind of water. In the case of adoptive/biological parents, the "parent" concept is metaphorically extended to include adoptive parents, with the original being renamed to "biological parent". These are examples of the kind of extensions I'm talking about, where the extended one becomes a canonical concept.

Comment by jessica-liu-taylor on Raemon's Scratchpad · 2019-07-24T04:18:21.906Z · score: 3 (2 votes) · LW · GW

It's still outlawing in the sense of outlawing certain chess moves, and in the sense of law thinking.

Here's one case:

A: X.

B: That's a relevant point, but I think saying X is bad for Y reason, and would like to talk about that.

A: No, let's continue the other conversation / Ok, I don't think saying X is bad for Z reason / Let's first figure out why X is true before discussing whether saying X is bad

Here's another:

A: X.

B: That's bad to say, for Y reason.

A: That's an appeal to consequences. It's a topic change.

B: Okay, I retract that / Ok, I am not arguing against X but would like to change the topic to whether saying X is bad

There aren't fully formal rules for this (this website isn't formal debate). The point is the structural issue of what kind of "move in the game" it is to say that saying X is bad.

Comment by jessica-liu-taylor on Raemon's Scratchpad · 2019-07-23T16:25:11.011Z · score: 2 (1 votes) · LW · GW

You're not interpreting me correctly if you think I'm saying bringing up possible consequences is banned. My claim is more about what the rules of the game should be such that degenerate strategies don't win. If, in a chess game, removing arbitrary pieces of your opponent is allowed (by the rules of the game), then the degenerate strategy "remove the opponent's king" wins. That doesn't mean that removing your opponent's king (e.g. to demonstrate a possibility or as a joke) is always wrong. But it's understood not to be a legal move. Similarly, allowing appeals to consequences to be accepted as arguments lets the degenerate strategy "control the conversation by insinuating that the other person is doing something morally wrong" win. Which doesn't mean you can't bring up consequences, it's just "not a valid move" in the original conversation. (This could be implemented in different ways; standard boilerplate is one way, but it's likely enough if nearly everyone understands why this is an invalid move)

Comment by jessica-liu-taylor on Raemon's Scratchpad · 2019-07-23T07:03:24.336Z · score: 5 (3 votes) · LW · GW

For example you could talk about “whether it’s a good idea to say X” until that matter is settled, and then return to the original topic.

This is what is critiqued in the dialogue. It makes silencing way too easy. I want to make silencing hard.

The core point is that appeals to consequences aren't arguments, they're topic changes. It's fine to change topic if everyone consents. (So, bringing up "I think saying X is bad, we can talk about that or could continue this conversation" is acceptable)

Comment by jessica-liu-taylor on Raemon's Scratchpad · 2019-07-22T16:45:24.481Z · score: 6 (4 votes) · LW · GW
  1. Intense truth-seeking spaces aren't for everyone. Growing the forum is not a strict positive. An Archipelago-type model may be useful, but I'm not confident it's worth it.

  2. There are techniques (e.g. focusing, meditation) for helping people process their emotions, which can be taught.

  3. Some politeness norms are acceptable (e.g. most insults that are about people's essential characteristics are not allowed), as long as these norms are compatible with a sufficiently high level of truthseeking to reach the truth on difficult questions including ones about adversarial dynamics.

  4. Giving advice to people is fine if it doesn't derail the discussion and it's optional to them whether they follow it (e.g. in an offline discussion after the original one). "Whether it's a good idea to say X" isn't a banned topic, the concern is that it gets brought up in a conversation where X is relevant (as if it's an argument against X) in a way that derails the discussion.

Comment by jessica-liu-taylor on Raemon's Scratchpad · 2019-07-22T08:58:09.618Z · score: 6 (3 votes) · LW · GW

This seems right, and I don't think this contradicts what I said. It can simultaneously be the case that their feelings are false (in the sense that they aren't representative of the actual situation) and that telling them that their feelings are false is going to make the situation worse.

Comment by jessica-liu-taylor on Raemon's Scratchpad · 2019-07-22T05:58:50.049Z · score: 8 (5 votes) · LW · GW

This seems exactly right to me. The main thing that annoys me is people using their feelings of defensiveness "as an argument" that I'm doing something wrong by saying the things that seem true/relevant, or that the things I'm saying are not important to engage with, instead of taking responsibility for their defensiveness. If someone can say "I feel defensive" and then do introspection on why, such that that reason can be discussed, that's very helpful. "I feel defensive and have to exit the conversation in order to reflect on this" is likely also helpful, if the reflection actually happens, especially if the conversation can continue some time after that (if it's sufficiently important). (See also feeling rational; feelings are something like "true/false" based on whether the world-conditions that would make the emotion representative pertain or not.)

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-21T21:43:53.577Z · score: 2 (1 votes) · LW · GW

The site as a whole.

I wasn't around in early LW, so this is hard for me to estimate. My very, very rough guess is 5x. (Note, IMO the recent good content is disproportionately written by people willing to talk about adversarial optimization patterns in a somewhat-forceful way despite pressures to be diplomatic)

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-21T09:56:58.684Z · score: 2 (1 votes) · LW · GW
  1. Yes.

  2. Think about why that is and adjust strategy and norms correspondingly. (Sorry that's underspecified, but it actually depends on the reasons). I don't know what happened to LW1, but it did have pretty high intellectual generativity for a while.

Comment by jessica.liu.taylor on [deleted post] 2019-07-20T03:08:27.089Z

I'm first going to summarize what I think you think:

  • $Billions are at stake.
  • People/organizations are giving public narratives about what they're doing, including ones that affect the $billions.
  • People/organizations also have narratives that function for maintaining a well-functioning, cohesive community.
  • People criticize these narratives sometimes. These criticisms have consequences.
  • Consequences include: People feel the need to defend themselves. People might lose funding for themselves or their organization. People might fall out of some "ingroup" that is having the important discussions. People might form coalitions that tear apart the community. The overall trust level in the community, including willingness to take the sensible actions that would be implied by the community narrative, goes down.
  • That doesn't mean criticism of such narratives is always bad. Sometimes, it can be done well.
  • Criticisms are important to make if the criticism is really clear and important (e.g. the criticism of ACE). Then, people can take appropriate action, and it's clear what to do. (See strong and clear evidence)
  • Criticisms are potentially destructive when they don't settle the matter. These can end up reducing cohesion/trust, splitting the community, tarnishing reputations of people who didn't actually do something wrong, etc.
  • These non-matter-settling criticisms can still be important to make. But, they should be done with sensitivity to the political dynamics involved.
  • People making public criticisms willy-nilly would lead to a bunch of bad effects (already mentioned). There are standards for what makes a good criticism, where "it's true/well-argued" is not the only standard. (Other standards are: is it clear, is it empathetic, did the critic try other channels first, etc)
  • It's still important to get to the truth, including truths about adversarial patterns. We should be doing this by thinking about what norms get at these truths with minimum harm caused along the way.

Here's a summary of what I think (written before I summarized what you thought):

  • The fact that $billions are at stake makes reaching the truth in public discussions strictly more important than for a philosophy club. (After all, these public discussions are affecting the background facts that private discussions, including ones that distribute large amounts of money, assume)
  • The fact that $billions are at stake increases the likelihood of obfuscatory action compared to in a philosophy club.
  • The "level one" thing to do is to keep using philosophy club norms, like old-LessWrong. Give reasons for thinking what you think. Don't make appeals to consequences or shut people up for saying inconvenient things; argue at the object level. Don't insult people. If you're too sensitive to hear the truth, that's for the most part your problem, with some exceptions (e.g. some personal insults). Mostly don't argue about whether the other people are biased/adversarial, and instead make good object-level arguments (this could be stated somewhat misleadingly as "assume good faith"). Have public debates, possibly with moderators.
  • A problem with "level one" norms is that they rarely talk about obfuscatory action. "Assume good faith", taken literally, implies obfuscation isn't happening, which is false given the circumstances (including monetary incentives). Philosophy club norms have some security flaws.
  • The "level two" thing to do is to extend philosophy club norms to handle discussion of adversarial action. Courts don't assume good faith; it would be transparently ridiculous to do so.
  • Courts blame and disproportionately punish people. We don't need to do this here, we need the truth to be revealed one way or another. Disproportionate punishments make people really defensive and obfuscatory, understandably. (Law fought fraud, and fraud won)
  • So, "level two" should develop language for talking about obfuscatory/destructive patterns of social action that doesn't disproportionately punish people just for getting caught up in them. (Note, there are some "karmic" consequences for getting caught up in these dynamics, like having the organization be less effective and getting a reputation for being bad at resisting social pressure, but these are very different from the disproportionate punishments typical of the legal system, which punish disproportionately on the assumption that most crime isn't caught)
  • I perceive a backslide from "level one" norms, towards more diplomatic norms, where certain things are considered "rude" to say and are "attacking people", even if they'd be accepted in philosophy club. I think this is about maintaining power illegitimately.

Here are more points that I thought of after summarizing your position:

  • I actually agree that individuals should be using their discernment about how and when to be making criticisms, given the political situation.
  • I worry that saying certain ways of making criticisms are good/bad results in people getting silenced/blamed even when they're saying true things, which is really bad.
  • So I'm tempted to argue that the norms for public discussion should be approximately "that which can be destroyed by the truth should be", with some level of privacy and politeness norms, the kind you'd have in a combination of a philosophy club and a court.
  • That said, there's still a complicated question of "how do you make criticisms well". I think advice on this is important. I think the correct advice usually looks more like advice to whistleblowers than advice for diplomacy.

Note, my opinion of your opinions, and my opinions, are expressed in pretty different ontologies. What are the cruxes?

Suppose future-me tells me that I'm pretty wrong, and actually I'm going about doing criticisms the wrong way, and advocating bad norms for criticism, relative to you. Here are the explanations I come up with:

  • "Scissor statements" are actually a huge risk. Make sure to prove the thing pretty definitively, or there will be a bunch of community splits that make discussion and cooperation harder. Yes, this means people are getting deceived in the meantime, and you can't stop that without causing worse bad consequences. Yes, this means group epistemology is really bad (resembling mob behavior), but you should try upgrading that a different way.
  • You're using language that implies court norms, but courts disproportionately punish people. This language is going to increase obfuscatory behavior way more than it's worth, and possibly result in disproportionate punishments. You should try really, really hard to develop different language. (Yes, this means some sacrifice in how clear things can be and how much momentum your reform movement can sustain)
  • People saying critical things about each other in public (including not-very-blamey things like "I think there's a distortionary dynamic you're getting caught up in") looks really bad in a way that deterministically makes powerful people, including just about everyone with money, stop listening to you or giving you money. Even if you get a true discourse going, the community's reputation will be tarnished by the justice process that led to that, in a way that locks the community out of power indefinitely. That's probably not worth it, you should try another approach that lets people save face.
  • Actually, you don't need to be doing public writing/criticism very much at all, people are perfectly willing to listen to you in private, you just have to use this strategy that you're not already using.

These are all pretty cruxy; none of them seem likely (though they're all plausible), and if I were convinced of any of them, I'd change my other beliefs and my overall approach.

There are a lot of subtleties here. I'm up for having in-person conversations if you think that would help (recorded / written up or not).

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-19T21:42:18.051Z · score: 6 (3 votes) · LW · GW

In a math conversation, people are going to say and possibly write down a bunch of beliefs, and make arguments that some beliefs follow from each other. The conversation itself could be represented as a transcript of beliefs and arguments. The beliefs in this transcript are what I mean by "the discourse's beliefs".

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-19T21:22:52.292Z · score: 10 (2 votes) · LW · GW

Yes. In practice, if people are discouraged from saying X on the basis that it might be bad to say it, then the discourse goes on believing not-X. So, the discourse itself makes an invalid step that's analogous to an appeal to consequences "if it's bad for us to think X is true then it's false".

Comment by jessica.liu.taylor on [deleted post] 2019-07-19T18:06:31.324Z

But the landscape now is very different than it was then. With billions of dollars available and at stake, what worked then can’t be the same thing as what works now.

Is this a claim that people are almost certainly going to be protecting their reputations (and also beliefs related to their reputations) in anti-epistemic ways when large amounts of money are at stake, in a way they wouldn't if they were just members of a philosophy club who didn't think much money was at stake?

This claim seems true to me. We might actually have a lot of agreement. And this matches my impression of "EA/rationality shift from 'that which can be destroyed by the truth should be' norms towards 'protect feelings' norms as they have grown and want to play nicely with power players while maintaining their own power."

If we agree on this point, the remaining disagreement is likely about the game theory of breaking the bad equilibrium as a small group, as you're saying it is.

(Also, thanks for bringing up money/power considerations where they're relevant; this makes the discussion much less obfuscated and much more likely to reach cruxes)

[Note, my impression is that the precious thing already exists among a small number of people, who are trying to maintain and grow the precious thing and are running into opposition, and enough such opposition can cause the precious thing to go away, and the precious thing is currently being maintained largely through willingness to forcefully push through opposition. Note also, if the precious thing used to exist (among people with strong stated willingness to maintain it) and now doesn't, that indicates that forces against this precious thing are strong, and have to be opposed to maintain the precious thing.]

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-19T16:17:28.963Z · score: 5 (3 votes) · LW · GW

Note, I'm not arguing for a positive obligation to always inform everyone (see last few lines of dialogue), it's important for people to use their discernment sometimes.

But, in the case you mentioned, if your study really did find that a vaccine caused autism, by the logic of the dialogue, that casts doubt on the "vaccines don't cause autism and antivaxxers are wrong and harmful" belief. (Maybe you're not the only one who has found that vaccines cause autism, and other researchers are hiding it too). So, you should at least update that belief on the new evidence before evaluating consequences. (It could be that, even after considering this, the new study is likely to be a fluke, and discerning researchers will share the new study in an academic community without going to the press)

Comment by jessica-liu-taylor on Dialogue on Appeals to Consequences · 2019-07-19T16:13:21.486Z · score: 5 (3 votes) · LW · GW

This is an allegory. While I didn't have any particular real-world example in mind, my dialogue-generation was influenced by a time I had seen appeals to consequences in EA; see EA Has A Lying Problem and this comment thread. So this was one of the more salient cases of a plausible moral case for shutting down true speech.

Comment by jessica-liu-taylor on Yes Requires the Possibility of No · 2019-07-19T05:06:59.420Z · score: 2 (1 votes) · LW · GW

Note that it's a central example if you're doing agent-based modeling, as Michael points out.

Comment by jessica-liu-taylor on The AI Timelines Scam · 2019-07-18T19:32:50.118Z · score: 2 (1 votes) · LW · GW

Of course there's an important difference between lying and being wrong. It's a question of knowledge states. Unconscious lying is a case when someone says something they unconsciously know to be false/unlikely.

If the estimates are biased, you can end up with worse beliefs than you would by just using an uninformative prior. Perhaps some are savvy enough to know about the biases involved (in part because of people like me writing posts like the one I wrote), but others aren't, and get tricked into having worse beliefs than if they had used an uninformative prior.
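
As a rough sketch of how biased estimates can leave someone worse off than an uninformative prior (the timeline numbers, bias size, and "near the truth" scoring below are illustrative assumptions of mine, not anything from the post or thread):

```python
# Toy model (hypothetical numbers): an agent treats systematically biased
# timeline estimates as unbiased, and ends up assigning almost no probability
# to the neighborhood of the truth, unlike a flat, uninformative prior.
import numpy as np

rng = np.random.default_rng(0)

true_years = 40.0                                   # assumed true timeline
reports = rng.normal(true_years - 25.0, 5.0, 20)    # estimates biased 25 years short

grid = np.linspace(0.0, 100.0, 1001)                # candidate timelines
log_post = np.zeros_like(grid)                      # flat prior over the grid
for r in reports:                                   # naive update: treat each report
    log_post += -0.5 * ((r - grid) / 5.0) ** 2      # as unbiased Normal(theta, 5)
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()

uninformative = np.ones_like(grid) / grid.size

def prob_near_truth(p, width=10.0):
    """Probability mass within `width` years of the true timeline."""
    return float(p[np.abs(grid - true_years) <= width].sum())

print(prob_near_truth(uninformative))  # ~0.20: vague, but the truth gets real weight
print(prob_near_truth(posterior))      # ~0.00: confidently centered ~25 years too early
```

Under these assumptions the diffuse prior is vague but still gives the true value real weight, while the confident posterior built from biased reports assigns it almost none.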

I am not trying to punish people, I am trying to make agent-based models.

(Regarding Madoff, what you present is suggestive, but it doesn't prove that he was conscious that he had no plans to trade and was deceiving his investors. We don't really know what he was conscious of and what he wasn't.)

Comment by jessica.liu.taylor on [deleted post] 2019-07-18T17:50:50.912Z

Echoing Ben, my concern here is that you are saying things that, if taken at face value, imply more broad responsibilities/restrictions than "don't insult people in side channels". (I might even be in favor of such a restriction if it's clearly defined and consistently enforced)

Here's an instance:

Absent that, I think it's fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.

This didn't specify "just side-channel consequences." Ordinary society blames people for non-side-channel consequences, too.

Here's another:

One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me truth things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.

This doesn't seem to be just about side channels. It seems to be an assertion that forcing informational updates on people is violent if it's upsetting or status lowering ("forcing what you think is true on other people"). (Note, there's ambiguity here regarding "in an upsetting or status lowering way", which could be referring to side channels; but, "forcing what you think is true on other people" has no references to side channels)

Here's another:

Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset).

This isn't just about side channels. There are certain things it's impolite to say directly (for a really clear illustration of this, see the movie The Invention of Lying; Zack linked to some clips in this comment). And, people are often upset by direct, frank speech.

You're saying that I'm being uncharitable by assuming you mean to restrict things other than side-channel insults. And, indeed, in the original document, you distinguished between "upsetting people through direct content" and "upsetting people through side channels". But, it seems that the things you are saying in the comment I replied to are saying people are responsible for upsetting people in a more general way.

The problem is that I don't know how to construct a coherent worldview that generates both "I'm only trying to restrict side-channel insults" and "causing people to have correct updates notwithstanding status-lowering consequences is violent." I think I made a mistake in taking the grandparent comment at face value instead of comparing it with the original document and noting the apparent inconsistency.

Comment by jessica.liu.taylor on [deleted post] 2019-07-17T08:26:35.897Z

Most of these are symmetric weapons too which don’t rely on truth.

The whole point of pro-truth norms is that only statements that are likely to be true get intersubjectively accepted, though...

This makes me think that you're not actually tracking the symmetry/asymmetry properties of different actions under different norm-sets.

Comment by jessica.liu.taylor on [deleted post] 2019-07-17T08:07:07.498Z

We are embodied, vulnerably, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps.

Related to Ben's comment chain here, there's a significant difference between minds that think of "accuracy of maps" as a good that is traded off against other goods (such as avoiding conflict), and minds that think of "accuracy of maps" as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they're conceptualized pretty differently)

That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?

When I consider things like "making the map less accurate in order to get some gain", I don't think "oh, that might be worth it, epistemic rationality isn't everything", I think "Jesus Christ you're killing everyone and ensuring we're stuck in the dark ages forever". That's, like, only a slight exaggeration. If the maps are wrong, and the wrongness is treated as a feature rather than a bug (such that it's normative to protect the wrongness from the truth), then we're in the dark ages indefinitely, and won't get life extension / FAI / benevolent world order / other nice things / etc. (This doesn't entail an obligation to make the map more accurate at any cost, or even to never profit by corrupting the maps; it's more like a strong prediction that it's extremely bad to stop the mapmaking system from self-correcting, such as by baking protection-from-the-truth into norms in a space devoted in large part to epistemic rationality.)

Comment by jessica.liu.taylor on [deleted post] 2019-07-17T07:44:55.003Z

"You're responsible for all consequences of your speech" might work as a decision criterion for yourself, but it doesn't work as a social norm. See this comment, and this post.

In other words, consequentialism doesn't work as a norm-set, it at best works as a decision rule for choosing among different norm-sets, or as a decision rule for agents already embedded in a social system.

Politeness isn't really about consequences directly; there are norms about what you're supposed to say or not say, which don't directly refer to the consequences of what you say (e.g. it's still rude to say certain things even if, in fact, no one gets harmed as a result, or the overall consequences are positive). These are implementable as norms, unlike "you are responsible for all consequences of your speech". (Of course, consideration of consequences is important in designing the politeness norms)

[EDIT: I expanded this into a post here]

Comment by jessica.liu.taylor on [deleted post] 2019-07-17T07:27:05.487Z

In my mind, this discussion isn't about whether you (the truth-speaker) should be coerced by some outside regulating force.

Norms are outside regulating forces, though. (Otherwise, they would just be heuristics)

Comment by jessica-liu-taylor on Open Thread July 2019 · 2019-07-16T06:23:06.803Z · score: 8 (5 votes) · LW · GW

It seems to me clear that, if Commenters have a policy of reporting the maximum of their perception and 10, then there is now less mutual information between the commenter's report and the actual post quality than there was previously. In particular, you now can't distinguish between a post of quality 2 and a post of quality 5 given any number of comments, whereas you could previously.
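
A minimal simulation of that claim (the 1–20 quality scale, the sample size, and the identification of perception with true quality are my own illustrative assumptions):

```python
# Estimate mutual information between a post's true quality and a commenter's
# report, comparing honest reports with "report max(perception, 10)".
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def entropy(samples):
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from samples
    return entropy(list(xs)) + entropy(list(ys)) - entropy(list(zip(xs, ys)))

quality = rng.integers(1, 21, size=100_000)   # true quality, uniform on 1..20
honest = quality                              # report = perceived quality
inflated = np.maximum(quality, 10)            # report = max(perceived quality, 10)

print(mutual_information(quality, honest))    # ~4.3 bits: report pins down quality
print(mutual_information(quality, inflated))  # ~2.7 bits: qualities 1..10 collapse to "10"
```

In this toy setup a quality-2 and a quality-5 post both produce a report of 10, so no number of such comments can separate them.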

Comment by jessica-liu-taylor on Why artificial optimism? · 2019-07-16T04:44:52.818Z · score: 4 (2 votes) · LW · GW

I don't think ethical vegetarians deal with this problem by literally remaining ignorant of what other people are eating, but rather there's a truce between ethical vegetarians and meat-eaters, involving politeness norms which make it impolite to call other people's dietary choices unethical.

I agree that at least soft rewards/punishments (such as people associating more with ethical vegetarians) are usually necessary to keep ethical principles incentive-compatible. (Since much of ethics is about finding opportunities for positive-sum trade while avoiding destructive conflicts, many such rewards come naturally)

Comment by jessica-liu-taylor on Integrity and accountability are core parts of rationality · 2019-07-16T04:40:49.312Z · score: 8 (4 votes) · LW · GW

Yes, I agree with this. (Though, of course, harm is minimized from constant principles-shifting if it's publicly declared, so no one expects the person to act consistently)

Comment by jessica-liu-taylor on Integrity and accountability are core parts of rationality · 2019-07-16T01:40:04.029Z · score: 35 (15 votes) · LW · GW

Integrity can be a double-edged sword. While it is good to judge people by the standards they expressed, it is also a surefire way to make people overly hesitant to update. If you get punished every time you change your mind because your new actions are now incongruent with the principles you explained to others before you changed your mind, then you are likely to stick with your principles for far longer than you would otherwise, even when evidence against your position is mounting.

We can distinguish two things that both fall under what you're calling integrity:

  1. Having one's current stated principles accord with one's current behavior.
  2. Maintaining the same stated principles over time.

It seems to me that, while (1) is generally virtuous, (2) is only selectively virtuous. I generally don't mind people abandoning their principles if they publicly say "well, I tried following these principles, and it didn't work / I stopped wanting to / I changed my mind about what principles are good / whatever, so I'm not following these anymore" (e.g. on Twitter). This can be quite useful to people who are tracking how possible it is to follow different principles given the social environment, including people considering adopting principles themselves. Unfortunately, principles are almost always abandoned silently.

It's worth noting that, in Judaism, because of the seriousness with which vows are treated, it is usually considered unvirtuous to make vows regularly:

The violation of both vows and oaths is considered a serious infraction in Jewish thought. While there are examples in the Bible of individuals making vows, by the rabbinic period the practice was deeply frowned upon. The Talmud states that the punishment for breaking a vow is the death of one’s children. The Shulchan Aruch explicitly warns people not to regularly make vows, and states that someone who does — even if they fulfill the vow — is called wicked and a sinner. Many observant Jews have the practice of saying b’li neder (“without a vow”) whenever they promise to do something to make explicit that they are not making a vow.

And there are rituals for dissolving vows:

Given the seriousness of oaths and vows, and the fact that Jews during some periods of history were compelled to make declarations of fealty to other religions, the rabbis developed formulas for the dissolution of vows. The best-known of these are performed in advance of the High Holidays.

Which makes sense under the view that silent abandonment of principles, without proper ritualistic recognition, is much more of a problem than abandonment of principles with proper ritualistic recognition. (Posting that you changed or broke your principles on social media seems like a fine ritual for people who don't already have one)

Comment by jessica-liu-taylor on Why artificial optimism? · 2019-07-16T01:19:07.767Z · score: 5 (3 votes) · LW · GW

My view is that people make better judgments with more information, generally (but not literally always), but not that they always make accurate judgments when they have more information. Suppressing criticism but not praise, in particular, is a move to intentionally miscalibrate/deceive the audience.