Why artificial optimism?

post by jessicata (jessica.liu.taylor) · 2019-07-15T21:41:24.223Z

This is a link post for https://unstableontology.com/2019/07/15/why-artificial-optimism/

Contents

  The parable of the gullible king
  Back to the real world

Optimism bias is well-known. Here are some examples.

  • People often think their project has an unrealistically high chance of succeeding. Why?
  • People often avoid looking at horrible things clearly. Why?
  • It's conventional to answer the question "How are you doing?" with "well", regardless of how you're actually doing. Why?
  • People often believe that it's inherently good to be happy, rather than thinking that their happiness level should track the actual state of affairs (and thus be a useful tool for emotional processing and communication). Why?
  • People often want to suppress criticism but less often want to suppress praise; in general, they hold criticism to a higher standard than praise. Why?

The parable of the gullible king

Imagine a kingdom ruled by a gullible king. The king gets reports from the different regions of the kingdom, each managed by a vassal. These reports detail how things are going in each region, including particular events and an overall summary of how well things are going. He is quite gullible, so he usually believes these reports, although not if they're too outlandish.

When he thinks things are going well in some region of the kingdom, he gives the vassal more resources, expands the region controlled by the vassal, encourages others to copy the practices of that region, and so on. When he thinks things are going poorly in some region of the kingdom (in a long-term way, not as a temporary crisis), he gives the vassal fewer resources, contracts the region controlled by the vassal, encourages others not to copy the practices of that region, possibly replaces the vassal, and so on. This behavior makes sense if he's assuming he's getting reliable information: it's better for practices that result in better outcomes to get copied, and for places with higher economic growth rates to get more resources.

Initially, this works well, and good practices are adopted throughout the kingdom. But, some vassals get the idea of exaggerating how well things are going in their own region, while denigrating other regions. This results in their own region getting more territory and resources, and their practices being adopted elsewhere.
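
A minimal sketch of this dynamic (with invented numbers; the parable itself specifies none): if the king allocates a fixed pool of resources in proportion to reported performance, a vassal who inflates their report gains exactly what the honest regions lose.

```python
# A toy model of the gullible king's allocation rule. All numbers are
# invented for illustration; the parable specifies none.

def allocate(reports, total_resources=100.0):
    """The king divides a fixed pool in proportion to reported performance."""
    total_reported = sum(reports.values())
    return {region: total_resources * score / total_reported
            for region, score in reports.items()}

# All three regions are actually doing equally well.
honest = {"A": 5.0, "B": 5.0, "C": 5.0}
print(allocate(honest))    # {'A': 33.3..., 'B': 33.3..., 'C': 33.3...}

# Vassal A doubles their reported performance. Nothing real has changed,
# but A's gain comes entirely out of B's and C's allocations.
inflated = {"A": 10.0, "B": 5.0, "C": 5.0}
print(allocate(inflated))  # {'A': 50.0, 'B': 25.0, 'C': 25.0}
```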

Soon, these distortions become ubiquitous, as the king (unwittingly) encourages everyone to adopt them, due to the apparent success of the regions distorting information this way. At this point, the vassals face a problem: while they want to exaggerate their own region and denigrate others, they don't want others to denigrate their own region. So, they start forming alliances with each other. Vassals that ally with each other promise to say only good things about each other's regions. That way, both vassals mutually benefit, as they both get more resources, expansion, etc., compared to if they had been denigrating each other's regions. These alliances also make sure to keep denigrating those not in the same coalition.

While these "praise coalitions" are locally positive-sum, they're globally zero-sum: any gains that come from them (such as resources and territory) are taken from other regions. (However, having more praise overall helps the vassals currently in power, as it means they're less likely to get replaced with other vassals).

Since praise coalitions lie, they also suppress the truth in general in a coordinated fashion. It's considered impolite to reveal certain forms of information that could imply things aren't actually going as well as the vassals say they are. Prying too closely into a region's actual state of affairs (and, especially, sharing this information) is considered a violation of privacy.

Meanwhile, the actual state of affairs has gotten worse in almost all regions, though the regions prop up their lies with Potemkin villages, so the gullible king isn't shocked when he visits them.

At some point, a single praise coalition wins. Vassals notice that it's in their interest to join this coalition, since (as mentioned before) it's in the interests of the vassals as a class to have more praise overall, since that means they're less likely to get replaced. (Of course, it's also in their class interests to have things actually be going well in their regions, so the praise doesn't get too out of hand, and criticism is sometimes accepted) At this point, it's conventional for vassals to always praise each other and punish vassals who denigrate other regions.

Optimism isn't ubiquitous, however. There are a few strategies vassals can use to claim more resources through pessimism. Among these are:

  • Pity: a vassal claims things are going poorly in their own region, but fixably, so that the king will send extra resources to help fix the problem.
  • Doomsaying: a vassal claims things are going poorly everywhere, but fixably, so that the king will direct resources toward whoever seems able to address the global problem.

Pity and doomsaying could be seen as two sides of the same coin: pity claims things are going poorly (but fixably) locally, while doomsaying claims things are going poorly (but fixably) globally. However, all of these strategies are limited to a significant degree by the overall praise coalition, so they don't get out of hand.

Back to the real world

Let's relate the parable of the gullible king back to the real world.

Politeness and privacy are, in fact, largely about maintaining impressions (especially positive impressions) through coordinating against the revelation of truth.

This model raises an important question (with implications for the real world): if you're a detective in the kingdom of the gullible king who is at least somewhat aware of the reality of the situation and the distortionary dynamics, and you want to fix the situation (or at least reduce harm), what are your options?

29 comments

comment by habryka (habryka4) · 2019-07-15T21:50:52.309Z
Politeness and privacy are, in fact, largely about maintaining impressions (especially positive impressions) through coordinating against the revelation of truth.

I think this is not fully correct. I think a significant fraction of politeness norms (and professionalism norms) come from trying to provide a simple, reliable API for two parties to engage in transactions with, in a way that limits downside risk.

If I hire a plumber, I really don't want to have a long philosophical discussion with my plumber, and I also don't really want them to provide commentary on my interior decoration while they are doing the plumbing. I mostly just want them to fix my plumbing, which requires me to describe my current plumbing problems, for them to ask clarifying questions, then for them to perform some units of work, and then for me to give them some money.

A lot of the constraints we put on professional interactions are, I think, not because we want to maintain impressions, but because we have to drastically simplify the interface of permissible speech acts to a small subset to make the interaction predictable and manageable.
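
To render that metaphor as literal code, here's a minimal, entirely hypothetical sketch (all class and method names invented for illustration):

```python
# The professional norm as a deliberately narrow interface: only a few
# speech acts are permitted, which keeps the interaction predictable.
# All names here are hypothetical, invented to illustrate the metaphor.

class PlumberTransaction:
    def __init__(self, problem_description: str):
        self.problem_description = problem_description
        self.paid = False

    def answer_clarifying_question(self, question: str) -> str:
        """The customer answers questions about the plumbing problem."""
        return f"Details relevant to: {question}"

    def perform_work(self) -> None:
        """The plumber fixes the plumbing."""
        print(f"Fixing: {self.problem_description}")

    def pay(self, amount: float) -> None:
        self.paid = True

    # Deliberately absent: discuss_philosophy(), critique_decor().
    # Excluding them is not about maintaining impressions; it limits
    # downside risk by keeping the transaction's surface area small.

job = PlumberTransaction("leaking kitchen sink")
job.perform_work()
job.pay(150.0)
```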

I do also separately think that a significant fraction of politeness norms are about maintaining impressions, but it felt important to highlight this alternative explanation, which I think explains a lot of the same data.

comment by Said Achmiz (SaidAchmiz) · 2019-07-15T21:58:08.109Z

Optimism bias is well-known. Here are some examples.

Some of these things are not like the others. Namely:

People often think their project has an unrealistically high chance of succeeding. Why?

People often avoid looking at horrible things clearly. Why?

These things do seem like examples of optimism bias.

It’s conventional to answer the question “How are you doing?” with “well”, regardless of how you’re actually doing. Why?

People often believe that it’s inherently good to be happy, rather than thinking that their happiness level should track the actual state of affairs (and thus be a useful tool for emotional processing and communication). Why?

People often want to suppress criticism but less often want to suppress praise; in general, they hold criticism to a higher standard than praise. Why?

But these things do not at all seem like examples of optimism bias. They seem like examples of very different phenomena. (And three distinct phenomena, I would say, rather than three examples of just one thing that’s different from optimism bias.)


By the way, in case anyone doesn’t feel like clicking through to the Wikipedia link in the OP, here’s the given definition of optimism bias:

Optimism bias is a cognitive bias that causes someone to believe that they themselves are less likely to experience a negative event. It is also known as unrealistic optimism or comparative optimism.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-15T22:12:15.404Z

"How are you doing" "Well", and suppressing criticism, are both examples of optimism bias on a social scale. The social norms appear to be optimized for causing more positivity than negativity to be expressed. Thus, the socially accepted beliefs have optimism bias.

The argument about happiness is somewhat more complex. I think the functional role of happiness in a mind is to track how well things have gone recently, whether things are going better than expected, etc. So, "hacking" that to make it be high regardless of the actual situation (wireheading) would result in optimism bias. (I agree this is different in that, rather than suggesting people already have optimism bias, it suggests people are talking as if it is normative to have optimism bias)

Replies from: SaidAchmiz, Raemon
comment by Said Achmiz (SaidAchmiz) · 2019-07-15T22:36:27.632Z

“How are you doing” “Well”, and suppressing criticism, are both examples of optimism bias on a social scale. The social norms appear to be optimized for causing more positivity than negativity to be expressed. Thus, the socially accepted beliefs have optimism bias.

How are “social norms … optimized for causing more positivity than negativity to be expressed” an example of “someone … believ[ing] that they themselves are less likely to experience a negative event”? What is the relationship of the one to the other, even?

As far as the happiness thing, this is really quite speculative and far from obvious, and while I don’t have much desire to argue about the functional role of happiness, etc., I would suggest that taking it to be an example of optimism bias (or indicative of a preference for having optimism bias, etc.) is ill-advised.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-15T22:55:01.296Z

It's hard to disentangle the belief that things are currently going well from the belief that things will go well in the future, as present circumstances cause future circumstances. In general, a bias towards thinking things are going well right now, will cause a bias towards thinking things are going to go well in the future.

If someone is building a ship, and someone criticizes the ship for being unsafe, but this criticism is suppressed, that would result in optimism bias at a social scale, since it leads people to falsely believe the ship is safer than it actually is.

If I'm actually worried about getting fired, but answer "well" to "how are you doing", then that would result in optimism bias on a social scale, since the socially accepted belief is falsely implying I'm not worried and my job is stable.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2019-07-16T00:59:53.894Z

If someone is building a ship, and someone criticizes the ship for being unsafe, but this criticism is suppressed, that would result in optimism bias at a social scale, since it leads people to falsely believe the ship is safer than it actually is.

This seems to assume that absent suppression of criticism, people's perceptions would be accurate.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-16T01:19:07.767Z

My view is that people make better judgments with more information, generally (but not literally always), but not that they always make accurate judgments when they have more information. Suppressing criticism but not praise, in particular, is a move to intentionally miscalibrate/deceive the audience.

comment by Raemon · 2019-07-15T22:28:51.266Z

I think there might be something similar going on in group optimism bias vs. individual optimism bias, but that this depends somewhat on whether you accept the multi-agent model of mind.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-15T22:30:27.049Z

In this case, I don't think so. In the parable, each vassal individually wants to maintain a positive impression. Additionally, vassals coordinate with each other to praise and not criticize each other (developing social norms such as almost always claiming things are going well). These are both serving the goal of each vassal maintaining a positive impression.

Replies from: Raemon
comment by Raemon · 2019-07-15T22:47:35.053Z

I think I'm asking the same question as Said: how is this the same phenomenon as someone saying "I'm fine", if not by relying on [something akin to] the multi-agent model of mind? Otherwise it looks like it's built out of quite different parts, even if they have some metaphorical similarities.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-15T22:59:17.055Z

I am claiming something like a difference between implicit beliefs (which drive actions) and explicit narratives (which drive speech), and claiming that the explicit narratives are biased towards thinking things are going well.

This difference could be implemented through a combination of self-deception and other-deception. So it could result in people having explicit beliefs that are too optimistic, or explicitly lying in ways that result in the things said being too optimistic. (Self-deception might be considered an instance of a multi-agent theory of mind, but I don't think it has to be; the explicit beliefs may be a construct rather than an agent)

Replies from: Raemon
comment by Raemon · 2019-07-15T23:00:39.315Z

Hmm, okay that makes sense. [I think there might be other models for what's going on here but agree that this model is plausible and doesn't require the multi-agent model]

comment by clone of saturn · 2019-07-15T23:00:52.938Z

Politeness and privacy are, in fact, largely about maintaining impressions (especially positive impressions) through coordinating against the revelation of truth.

People don't always agree with each other about what's good and bad. Knowingly allowing someone to get away with something bad makes you bad. Coordinating against the revelation of truth allows us to get something productive done together instead of spending all our time fighting.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-15T23:07:53.356Z

Knowingly allowing someone to get away with something bad makes you bad.

While some people have a belief like this, this seems false from a philosophical ethical perspective. E.g. even if eating meat is unethical (for you to do), that doesn't mean forcing everyone to not eat meat would be ethical, as such coercion would result in additional costs.

Ethics is often about trying to avoid destructive conflicts, so "punish every unethical thing" is actually pretty unethical.

(Note that coordinating against revelation of truth also means you're letting people get away with doing bad things, although in a hidden way)

Replies from: clone of saturn, SaidAchmiz
comment by clone of saturn · 2019-07-16T04:20:55.107Z

Knowingly allowing someone to get away with something bad makes you bad.

While some people have a belief like this, this seems false from a philosophical ethical perspective.

I think a philosophical ethical perspective that labels this "false" (and not just incomplete or under-nuanced) is failing to engage with the phenomenon of ethics as it actually happens in the world. Ethics arose in this cold and indifferent universe because being ethical is a winning strategy, but being "ethical" all by yourself without any mechanism to keep everyone around you ethical is not a winning strategy.

The cost of explicitly punishing people for not being vegetarian is prohibitive because vegetarianism is still a small and entrepreneurial ethical system, but you can certainly at least punish non-vegetarians by preferentially choosing other vegetarians to associate with. Well-established ethical systems like anti-murder-ism have much less difficulty affording severe punishments.

An important innovation is that you can cooperate with people who might be bad overall, as long as they follow a more minimal set of rules (for example, the Uniform Commercial Code). Or in other words, you can have concentric circles of ethicalness, making more limited ethical demands of people you interact with less closely. But when you interact with people in your outer circle, how do people in your inner circle know you don't condone all of the bad things they might be doing? One way is to have some kind of system of group membership, with rules that explicitly apply only to group members. But a cheaper and more flexible way is to simply remain ignorant about anything that isn't relevant--a.k.a. respect their privacy.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-07-16T04:44:52.818Z

I don't think ethical vegetarians deal with this problem by literally remaining ignorant of what other people are eating, but rather there's a truce between ethical vegetarians and meat-eaters, involving politeness norms which make it impolite to call other people's dietary choices unethical.

I agree that at least soft rewards/punishments (such as people associating more with ethical vegetarians) are usually necessary to keep ethical principles incentive-compatible. (Since much of ethics is about finding opportunities for positive-sum trade while avoiding destructive conflicts, many such rewards come naturally)

comment by Said Achmiz (SaidAchmiz) · 2019-07-16T03:57:54.605Z

The important thing is that not only some but most people do, in fact, believe this. That is, I am fairly sure, what clone of saturn meant to convey—he was not claiming this is true, but alluding to the fact that it’s a commonly held view, and it is that social fact which makes it beneficial to maintain positive impressions in order to avoid counterproductive fighting.

comment by Gordon Seidoh Worley (gworley) · 2019-07-15T23:06:08.169Z

While reading this I kind of forgot you were talking about multiple agents and was thinking instead about subagents a la a multi-agent theory of mind. In this sense optimism can come up from the inside, as parts of the mind claiming higher-confidence predictions win out over lower-confidence predictions, gradually replacing lower-confidence (perhaps accurate) perceptions, beliefs, etc. with higher-confidence (perhaps inaccurate) ones. Then when you ask to know something about the world, you just get back the high-confidence answers, have too much confidence yourself, and have to struggle to stay humble. The modern depression pandemic notwithstanding, this would seem to offer a possible explanation of why humans are generally overconfident rather than underconfident (or at least I think it's true that humans are more over- than underconfident, but I could be mistaken about this, though I expect I am right given that virtue training historically almost always included "humility" but rarely something like "confidence").

comment by Shmi (shminux) · 2019-07-15T21:49:54.481Z
This model raises an important question (with implications for the real world): if you're a detective in the kingdom of the gullible king who is at least somewhat aware of the reality of the situation and the distortionary dynamics, and you want to fix the situation (or at least reduce harm), what are your options?

I suspect that is not the first question to ask. In the spirit of Inadequate Equilibria, a better initial pair of questions would be "Can you take advantage of the apparent irrationality of the situation?" and "What fraction of the population would have to cooperate to change things for the better?" If there is no clear answer to either, then the situation is not as irrational as it seems, and the artificial optimism is, in fact, the best policy under the circumstances.

Replies from: philh
comment by philh · 2019-07-19T14:29:55.188Z

It's not really clear to me what it would mean for a situation to be rational or irrational; Jessica didn't use either of those words.

If the answers are "no" and "lots", doesn't that just mean you're in a bad Nash equilibrium? You seem to be advising "when caught in a prisoner's dilemma, optimal play is to defect", and I feel Jessica is more asking "how do we get out of this prisoner's dilemma?"
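
For concreteness, this is the standard prisoner's dilemma structure being alluded to; a minimal sketch with the conventional textbook payoffs (not numbers from the post):

```python
# Conventional textbook prisoner's dilemma payoffs for the row player.
# Defecting is the dominant strategy, yet mutual cooperation beats
# mutual defection: the sense in which this is a bad Nash equilibrium.

payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

for opponent in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda me: payoff[(me, opponent)])
    print(f"Against {opponent!r}, the best response is {best!r}.")
# Against 'cooperate', the best response is 'defect'.
# Against 'defect', the best response is 'defect'.
# So (defect, defect) is the unique equilibrium, even though both players
# would prefer (cooperate, cooperate): payoff 3 each instead of 1 each.
```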

Replies from: shminux
comment by Shmi (shminux) · 2019-07-20T02:36:16.537Z

My point, as usual not well articulated, is that the question "how to fix things?" is way down the line. First, the apparent "distortionary dynamics" may be only an appearance of one. The situation described is a common if metastable equilibrium, and it is not clear to me whether it is "distortionary" or not. So, after the first impulse to "fix" the status quo passes, it's good to investigate it first. I didn't mean to suggest one *should* take advantage of the situation, merely to investigate whether one *could*. Just like in one of Eliezer's examples, seeing a single overvalued house does not help you profit from it. And if there is indeed a way to do so, meaning the equilibrium is shallow enough, the next step would be to model the system as it climbs out of the current state and rolls down into one of many possible other equilibria. Those other ones may be even worse, yet the metric usually applied compares the current state against an imaginary ideal state. A few examples:

  • Most new businesses fail within 3 years, but without new aspiring entrepreneurs having too rosy an estimate of their chances of success (cf. the optimism bias mentioned in the OP) there would be far fewer new businesses, and everyone would be worse off in the long run.
  • Karl Marx called for the liberation of the working class through revolution, but any actual revolution makes things worse for everyone, including the working class, at least in the short to medium run (years to decades). If anything, history has shown that incremental, evolutionary advances work a lot better.
  • The Potemkin villages discussed above can, in moderation, be an emotional stimulus for people to try harder. In fact, a lot of the fake statistics in the former Soviet Union served that purpose.
comment by limerott · 2019-07-16T19:37:13.100Z

If the king is too gullible, the vassals have an economic incentive to abuse this through the various methods you described. This eventually leads to a permanent distortion of the truth. If an economic crisis hits the country, his lack of truthful information would prevent him from solving it. The crisis would worsen, which, at some point, would drive the population to rebel and topple him. As a replacement, a less gullible king would be put into power. This looks like a control loop to me -- one that gets rid of kings who are too gullible.

So, my strategy as detective would be to wait until the situation gets worse and stage a coup.

I want to argue that a healthy dose of artificial optimism can be useful (by the way, isn't all optimism artificial? Otherwise, we would call it realism). This can be on a personal level: If you expect to have a good day, you are more likely to do things that will make your day good. Or, in your scenario, if a vassal whose region isn't going great starts to praise it and gets more resources this way, he can invest them into rebuilding it (although I question the policy of assigning more resources to those regions that are faring well anyway).

As a side note, this reminds me of the Great Leap Forward under Mao, which caused millions of deaths by starvation. The main reason was deceitful reporting: information about crops had to pass through multiple levels on its way to the central authority (the farmers themselves, districts, cities, counties, states), and at each level the representatives exaggerated the numbers slightly. This added up and eventually led the top party officials to believe that everything was going great while millions were dying. (Of course, this is a hierarchical structure of vassals, but it's still artificial optimism)
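
A toy calculation shows how quickly "slight" exaggeration compounds (the five levels are from the description above; the 10% per-level figure is invented for illustration):

```python
# Toy illustration of compounding exaggeration in a reporting hierarchy.
# The five levels are from the comment; the 10% inflation per level is
# an invented number for illustration.

levels = ["farmers", "districts", "cities", "counties", "states"]
true_harvest = 100.0
reported = true_harvest

for level in levels:
    reported *= 1.10  # each level exaggerates the figure by 10%
    print(f"after {level}: {reported:.1f}")

# after farmers: 110.0
# after districts: 121.0
# after cities: 133.1
# after counties: 146.4
# after states: 161.1
# Five "slight" exaggerations compound into a report ~61% above reality.
```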

comment by Dagon · 2019-07-16T15:55:54.850Z

How do we know it's a problem? In a world where we can't make perfect (or even very good) point predictions, and on most topics don't have the ability to formalize a model of probability space, what is the proper level of optimism (of picking more pleasant examples to feed the inevitable availability heuristic we will experience)? Your important question at the end seems like the right one to be asking: how can a realist improve the situation?

And it starts with defining "improve". For a lot of cases, optimism is the only way to actually start any ambitious project - the realist option is to maintain the status quo, which is not clearly better than taking a risk of failure.

I often wonder if optimism bias is a cultural reaction to other biases, like loss aversion and the drive to conformity. If so, we'll need to address those at the same time, or we're moving AWAY from truth by removing only one side of the bias equilibrium.

comment by ESRogs · 2020-06-12T05:12:03.442Z

People often believe that it's inherently good to be happy, rather than thinking that their happiness level should track the actual state of affairs (and thus be a useful tool for emotional processing and communication). Why?

Isn't your happiness level one of the most important parts of the "actual state of affairs"? How would you measure the value of the actual state of affairs other than according to how it affects your (or others') happiness?

It seems to me that it is inherently good to be happy. All else equal, being happier is better.

That said, I agree that it's good to pay a cost in temporarily lower happiness (e.g. for emotional processing, etc) to achieve more happiness later. If that's all you mean -- that the optimal strategy allows for temporary unhappiness, and it's unwise to try to force yourself or others to be happy in all moments -- then I don't disagree.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2020-06-13T00:06:18.689Z

"Isn't the score I get in the game I'm playing one of the most important part of the 'actual state of affairs'? How would you measure the value of the actual state of affairs other than according to how it affects your (or others') scores?"

I'm not sure if this analogy is, by itself, convincing. But, it's suggestive, in that happiness is a simple, scalar-like thing, and it would be strange for such a simple thing to have a high degree of intrinsic value. Rather, on a broad perspective, it would seem that those things of most intrinsic value are those things that are computationally interesting, which can explore and cohere different sources of information, etc, rather than very simple scalars. (Of course, scalars can offer information about other things)

On an evolutionary account, why would it be fit for an organism to care about a scalar quantity, except insofar as that quantity is correlated with the organism's fitness? It would seem that wireheading is a bug, from a design perspective.

Replies from: ESRogs
comment by ESRogs · 2020-06-13T04:12:42.839Z

I get the analogy. And I guess I'd agree that I value complex positive emotions that are intertwined with the world more than sort of one-note ones. (E.g. being on molly felt nice but kind of empty.)

But I don't think there's much intrinsic value in the world other than the experiences of sentient beings.

A cold and lifeless universe seems not that valuable. And if the universe has life I want those beings to be happy, all else equal. What do you want?

And regarding the evolutionary perspective, what do I care what's fit or not? My utility function is not inclusive genetic fitness.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2020-06-13T05:25:06.864Z

Experiences of sentient beings are valuable, but have to be "about" something to properly be experiences, rather than, say, imagination.

I would rather that conditions in the universe are good for the lifeforms, and that the lifeforms' emotions track the situation, such that the lifeforms are happy. But if the universe is bad, then it's better (IMO) for the lifeforms to be sad about that.

The issue with evolution is that it's a puzzle that evolution would create animals that try to wirehead themselves; it's not a moral argument against wireheading.

Replies from: ESRogs
comment by ESRogs · 2020-06-13T06:19:08.032Z

I would rather that conditions in the universe are good for the lifeforms


How do you measure this? What does it mean for conditions in the universe to be good for the lifeforms, other than that they give the lifeforms good experiences?

You're wanting to ground positive emotions in objectively good states. But I'm wanting to ground the goodness of states in the positive emotions they produce.

Perhaps there's some reflexivity here, where we both evaluate positive emotions based on how well they track reality, and we also evaluate reality on how much it produces positive emotions. But we need some way for it to bottom out.

For me, I would think positive emotions are more fundamentally good than universe states, so that seems like a safer place to ground the recursion. But I'm curious if you've got another view.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2020-06-13T06:49:05.078Z

I don't have a great theory here, but some pointers at non-hedonic values are:

  • "Wanting" as a separate thing from "liking"; what is planned/steered towards, versus what affective states are generated? See this. In a literal sense, people don't very much want to be happy.
  • It's common to speak in terms of "mental functions", e.g. perception and planning. The mind has a sort of "telos"/direction, which is not primarily towards maximizing happiness (if it were, we'd be happier); rather, the happiness signal has a function as part of the mind's functioning.
  • The desire to not be deceived, or to be correct, requires a correspondence between states of mind and objective states. To be deceived about, say, which mathematical results are true/interesting, means to explore a much more impoverished space of mathematical reasoning, than one could by having intact mathematical judgment.
  • Related to deception, social emotions are referential: they refer to other beings. The emotion can be present without the other beings existing, but this is a case of deception. Living in a simulation in which all apparent intelligent beings are actually (convincing) nonsentient robots seems undesirable.
  • Desire for variety. Having the same happy mind replicated everywhere is unsatisfying compared to having a diversity of mental states being explored. Perhaps you could erase your memory so you could re-experience the same great movie/art/whatever repeatedly, but would you want to?
  • Relatedly, the best art integrates positive and negative emotions. Having only positive emotions is like painting using only warm colors.

In epistemic matters we accept that beliefs about what is true may be wrong, in the sense that they may be incoherent, incompatible with other information, fail to take into account certain hypotheses, etc. Similarly, we may accept that beliefs about the quality of one's experience may be wrong, in that they may be incoherent, incompatible with other information, fail to take into account certain hypotheses, etc. There has to be a starting point for investigation (as there is in epistemic matters), which might or might not be hedonic, but coherence criteria and so on will modify the starting point.

I suspect that some of my opinions here are influenced by certain meditative experiences that reduce the degree to which experiential valence seems important, in comparison to variety, coherence, and functionality.