The "a UBI would make people spend time in ways that made them feel miserable" argument has always felt a little odd to me. It's essentially claiming that
• people who could get jobs where they feel good
• will either quit their jobs or just not get jobs in the first place
• and feel miserable as a result
• but will nevertheless choose to stay unemployed
But if that's the case, why wouldn't those people just... recognize that they are feeling bad, so get jobs and feel better?
I think there's an implication of something like "they will be so badly addicted to lotuses that they can't get a job despite knowing that the lotuses just make them feel worse", but if their mental health is in that bad of a shape, how likely is it that they could or would get a fulfilling job anyway?
I certainly believe that there exists some percentage of the population for whom this combination of factors holds, but it seems hard to believe that they would be such a significant fraction that their loss of well-being would outweigh the increased well-being that others would get from the UBI.
(Now if the argument was something like "people on a UBI would stop working and be happier as a result, and it's morally wrong for non-working people to be happier when they are just living off the people who do work", that would be a different matter, but that's not the argument being made here.)
At least the most typical result of the UBI trials conducted so far seems to be that they neither increase nor decrease the employment rate (though they sometimes get cancelled because people assume they make recipients less likely to work).
Scholar's Stage would agree with the "does not hate you, nor does it love you" bit, but has a somewhat different take on it; the space of possible narratives would be limited even if the writer didn't care about advertising revenue at all:
[The] need to reduce reality to a simple mental model is an inherent feature of human cognition. For the most part it is done automatically without much thought. We cannot avoid simplification—we speak of London doing this or China doing that not because such simplifications are true (there is no unitary agent named “London” or “China” doing anything) but because it is impossible to act in a complex world without such short cuts.
The problems of journalism are the problems of cognition on steroids. For the journalist, historian, or social scientist, the drive to reduce is acute and explicit. On top of the normal simplification we all do unconsciously, nonfiction writers must reduce twice more: The first round of reduction comes with investigation. Any subject is too large to be understood in toto.
The investigator must decide where to focus her efforts, how to spend limited time, what sources to consult, what questions to ask, and what sort of evidence to be on the lookout for. Many of these things are not explicitly decided, but are forced upon the investigator by the nature of her tools and sources or by her preconceived sense of what is notable and what is not.
The second round of simplification, just as inherent to the journalistic enterprise as the first, is built into the act of writing. The investigator has collected in her brain more than can ever be put on a page. Journalists in particular must condense what they have learned into a very small space. This double reduction process is often described as “framing” a story. Reducing an entire movement—the histories, controversies, disagreements, defeats, glories, and quirks of thousands of unique individuals—to one comprehensible frame will always cut important things out. It is inevitable that some members of the covered group will be dissatisfied with the frame they have been forced into.
This process, far more than any explicit ideological agenda, is the source of most bias in journalism. This source of bias cannot be escaped. Stories without a frame are just an incoherent collection of facts too long and too varied to fit on a page. The bias imposed by framing is necessary—and sometimes even a good thing. [...]
The trouble comes when attachment to a given frame leads journalists into misperceiving their subjects, forcing them into a framework that does not really fit them. If you are primed to think of internet subcultures through the gamergate frame, gamergate is all you will ever find. In the terminology of the rationalists, it is a problem of “priors.” All that was required for a mess like this was a writer with wildly different priors and tight time demands to come into contact with a community they only had a superficial understanding of. No active malice is necessary.
And while he does note that the NYT explicitly wants particular narratives, he also mentions that the incentives involved are a bit more complicated than just going after advertising revenue:
This idea that the New York Times publishes things for “the clicks” is common but inaccurate. Vox publishes things for the clicks. The New York Times, like other top tier publications such as The New Yorker or the Washington Post, makes its money from subscriptions and side services--like that high school trip to Peru that got a certain Times reporter fired. The New York Times is rolling in dough, and that dough has nothing to do with the virality of any given article. In fact, there is a good chance that uber-viral articles cost them more readers than they gain from them. No one subscribed because of 1619 or the Cotton op-ed, but a lot of people did unsubscribe because of them!
Likewise most writers, regardless of publication, care very little about their hit count. At most publications individual writers are not even told site traffic stats for individual pieces. Only in rare cases is payment tied to popularity. What motivates writers and journalists is not clicks but prestige. They measure their self-worth through the esteem of their fellow writers, and write to that end. For more on this see my post "Why Writers (And Think Tankers) Feud So Viciously."
I was reminded of this anecdote about how the US Marines train agency. The part about encouragement from superiors in the areas where the recruits are weakest sounds somewhat similar to your theory of mysterious old wizards.
"... some people’s sense of self-determination gets suppressed by how they grow up, or experiences they’ve had, and they forget how much influence they can have on their own lives."
“That’s when training is helpful, because if you put people in situations where they can practice feeling in control, where that internal locus of control is reawakened, then people can start building habits that make them feel like they’re in charge of their own lives—and the more they feel that way, the more they really are in control of themselves.”
For [Marine commandant Charles] Krulak, studies like this seemed to hold the key to teaching recruits self-motivation. If he could redesign basic training to force trainees to take control of their own choices, that impulse might become more automatic, he hoped. “Today we call it teaching ‘a bias toward action,’” Krulak told me. “The idea is that once recruits have taken control of a few situations, they start to learn how good it feels.
“We never tell anyone they’re a natural-born leader. ‘Natural born’ means it’s outside your control,” Krulak said. “Instead, we teach them that leadership is learned, it’s the product of effort. We push recruits to experience that thrill of taking control, of feeling the rush of being in charge. Once we get them addicted to that, they’re hooked.”
For [fresh recruit] Quintanilla, this tutorial started as soon as he arrived. Initially, there were long days of forced marches, endless sit-ups and push-ups, and tedious rifle drills. Instructors screamed at him constantly. (“We’ve got an image to uphold,” Krulak told me.) But alongside those exercises, Quintanilla also confronted a steady stream of situations that forced him to make decisions and take control.
In his fourth week of training, for instance, Quintanilla’s platoon was told to clean the mess hall. The recruits had no idea how. They didn’t know where the cleaning supplies were located or how the industrial dishwasher worked. Lunch had just ended and they weren’t sure if they were supposed to wrap the leftovers or throw them away. Whenever someone approached a drill instructor for advice, all he received was a scowl. So the platoon began making choices. The potato salad got tossed, the leftover hamburgers went into the fridge, and the dishwasher was loaded with so much detergent that suds soon covered the floor. It took three and a half hours, including the time spent mopping up the bubbles, for the platoon to finish cleaning the mess hall. They mistakenly threw away edible food, accidentally turned off the ice cream freezer, and somehow managed to misplace two dozen forks. When they were done, however, their drill instructor approached the smallest, shyest member of the platoon and said he had noticed how the recruit had asserted himself when a decision was needed on where to put the ketchup. In truth, it was pretty obvious where the ketchup should have gone. There was a huge set of shelves containing nothing but ketchup bottles. But the shy recruit beamed as he was praised.
“I hand out a number of compliments, and all of them are designed to be unexpected,” said Sergeant Dennis Joy, a thoroughly intimidating drill instructor who showed me around the Recruit Depot one day. “You’ll never get rewarded for doing what’s easy for you. If you’re an athlete, I’ll never compliment you on a good run. Only the small guy gets congratulated for running fast. Only the shy guy gets recognized for stepping into a leadership role. We praise people for doing things that are hard. That’s how they learn to believe they can do them.”
Digging into research by the Marine Corps (and later work done by psychologists and psychiatrists), Krulak discovered that an internal locus of control was a huge predictor of self-motivation and success.
Locus of control comes in two flavors:
• With an internal locus of control, you believe that the events in your life are the result of your own actions.
• With an external locus of control, you believe that the events in your life are the result of outside forces.
My experience of attending a CFAR workshop was that Mysterious Old Wizardness was its primary function. Yes, there were various skills and techniques taught, but they were tools for making you see that hey, you can actually improve your life and Think Big.
Though maybe that was just the message that happened to resonate the most for me, whereas others had different takeaways? Curious to hear thoughts about that.
This was an interesting read, because my first thought was "the problem-oriented mode isn't a third mode, it's just what the advice mode looks like when it's done right"... until I came across your list of issues and thought that huh, I guess that it is a distinct mode after all.
Come to think of it, the problem-oriented mode seems similar to coaching, and coaching manuals do explicitly say that coaching is not about offering solutions, but rather it's about asking clarifying questions that help the other person figure out a solution themselves. So then the modes might reflect varying points in a space with three dimensions: amount of support, amount of offered solutions, and amount of clarifying questions.
What you're calling a problem-oriented mode sounds like it's closer to coaching on the "clarifying questions" dimension than the other two modes are, but given that it's still aiming to eventually propose solutions, it's not quite the same thing.
Technically this breaks the rule of "Same level of technology as today" but I feel it should count on the grounds that I am personally inventing this technology using off-the-shelf infrastructure.
Yeah, I think that "something that's perfectly doable with today's technology without requiring novel breakthroughs, where nobody has just bothered putting that particular application together" is within the spirit of "same level of technology".
As a counterpoint, one writer thinks that it's psychologically harder for organizations to think about PR:
A famous investigative reporter once asked me why my corporate clients were so terrible at defending themselves during controversy. I explained, “It’s not what they do. Companies make and sell stuff. They don’t fight critics for a living. And they dread the very idea of a fight. Critics criticize; it’s their entire purpose for existing; it’s what they do.”
“But the companies have all that money!” he said, exasperated.
“But their critics have you,” I said.
The conversation ended.
My point was that companies are so psychologically traumatized by the very prospect of controversy that many of the battles they may face are over before they begin. This mindset has four pillars: denial, avoidance, surrender, and expedience. It also has a basis in functional reality. In addition to the drain on financial resources, companies don’t have all day to sit around fighting issue-warriors and the “bathrobe brigade,” the diffuse army of millions who wage war on the world from their kitchen table laptops at no cost. Their critics are able to make decisions about prosecuting attacks in a fraction of the time it takes big organizations to figure out how to respond or whether to respond at all.
Companies are simply not set up to manage crises either mechanically or constitutionally, whereas their adversaries are. Corporate and institutional critics have passion, will, and the cloak of virtue. They want the attack to remain in perpetuity. Their targets, conversely, have a different mindset: They are motivated by institutional tranquility—they want the enterprise to keep humming along, quietly paying dividends and maintaining job security.
Despite the prevalence of corporate sales meetings that traffic in the conceit that executives are barrier-busting rebels, most corporate people find fights with issue-warriors to be distressing on a personal level and resist participating. This can be because of a basic sympathy with the critics’ positions, concern about doing anything that could escalate tensions, or a fear of the career consequences of being in the line of fire. I have been in hundreds of meetings and on phone calls with large organizations under siege, and the prevailing theme of these sessions is JUST MAKE IT STOP. Put differently, it is in no one’s self-interest to make a broader organizational challenge one’s own personal jihad, to try to preserve the organization more than it cares to preserve itself.
When it is under attack, an institution is little more than a collection of individuals angling for self-preservation. No one’s mental framework includes a career arc that places them in the middle of a Fiasco Vortex during a climate when there are dozens of data points that will be leaked or otherwise surface in discovery or depositions. One corporate client likened being on a crisis management team to being a character in William Golding’s Lord of the Flies, never knowing which of his colleagues may end up killing him. His corporate enemies were gently nudging him into the spotlight hoping that if he became the face of the crisis, he, not they, would take the fall.
They have some suggestions of how to do that in the episode; one is just exhibiting behaviors that don't fit the idealized image they want to project on you. (Taft: "It's remarkably easy to break, at least for a little while, by just - you know - picking your nose or swearing or something. And if I notice someone doing [the idealization] - because you can tell when it's happening - I just keep breaking it and breaking it and breaking it until it breaks, and then probably they'll go away at that point if that was their goal, you know, 'he was not who I thought he was' and then they lose interest. But if they stick around after that, then they are probably seeing me quite a bit more for who I am.")
Another thing that he mentions is that while you do want to maintain boundaries - don't let crazy people call you at 3 AM - it's also good if you can reduce distance and let people in close. If people stay distant and never meet you, then it's easy to continue idealizing you, whereas meeting you in person makes it easier for them to see who you actually are. He used to invite anyone who was interested into his living room for his meditation class, and "while that was probably too much", he says it was good for getting that distance down.
Just happened to notice an interesting paper on another cultural difference: in the US, children who have better self-control tend to believe more strongly in free will; in China, Singapore and Peru, self-control and belief in free will are not correlated.
The authors hypothesize that this is because of different cultural models about the nature of behavior: US culture explains self-control as a property of the individual, whereas the culture in the other countries explains it as a property of the social context the individual is in. As a result, when US children successfully practice self-control, they see it as affirming the existence of free will, whereas when children in other countries practice self-control, they see it as affirming the effect of their context on their behavior.
Four-year-old children in the U.S. generally say that if a person “really wants” to do something – play a fun game, for example – she has to do it (cannot choose not to do it). Similarly, 4-year-olds say if someone “really doesn't want” to do something – e.g. look in a scary closet – they cannot choose to do it. (Kushnir et al., 2015; Wente et al., 2016). Six-year-olds and older children in the U.S. are more optimistic about their own and others' ability to perform undesirable actions and to inhibit desirable ones – they have a conception more like the classic Western notion of absolute free will. [...]
Children in different cultures grow up surrounded by these different folk-psychological theories of mind and self. For example, children growing up in middle-class North American cultures are raised by adults who often view intentional actions as stemming from individual desires, preferences, and subjective mental states. Children growing up in Asian cultures are raised by adults who more frequently view agents as responding to situations, social roles, and the expectations of other individuals. Of course, all children across these cultural contexts develop understandings of individual minds and mental states and learn the importance of social roles and expectations. However, culture plays a role in how we emphasize and weigh these different factors in ordinary causal-explanatory reasoning about actions (Morris & Peng, 1994). Culture also plays a role in how we talk to children about actions: a large body of work shows that, in conversations with children about events and experiences, parents in individualistic versus collectivistic cultural contexts consistently emphasize individual mental states versus relational roles and social expectations respectively, and through these conversations transmit different cultural views on agency and self to their children (Wang, 2006; Wang & Leichtman, 2000). It is therefore conceivable that, even for children as young as four, self-control experience could be interpreted through a cultural lens. [...]
The results from Study 1 show a culturally moderated relationship between self-control abilities and beliefs about the “free will” to act against or inhibit strong desires. Though we found similar self-control performance across the three cultures, Singaporean children reported weaker belief in free will than Chinese and U.S. children. These findings on their own indicate that free will beliefs and self-control abilities do not necessarily align, at least when contrasting samples across cultures. Second, controlling for age, U.S. children who held a stronger belief in their ability to act against or inhibit strong desires performed better on tasks requiring self-control. No such correlations were observed in the two East Asian cultures, again despite overall similarities in the main developmental trajectory of both their self-control abilities and their free will beliefs. [...]
One explanation for the culturally moderated link between beliefs and abilities is that it reflects both cultural and experiential influences. If children in our U.S. sample experience their self-control as internally guided, then they may interpret their first-person experiences in situations that require self-control as evidence in support of the possibility that they can successfully control impulses and desires at will. As they get older, they more readily endorse the idea that agents have the free will to act against their desires because they actually see themselves get better at practicing self-control. On the other hand, children from Singapore and China may experience their self-control as externally guided, caused by social norms or external influences without the intermediary influence of an internal “will”. In that case, the experience of self-control might have no effect on beliefs about free will. [...]
These data suggest that the Western causal-explanatory framework – which includes an emphasis on internal mental states – frames children's experience of their own self-control. As further support for this idea, we found some indication that the culturally-moderated link has a causal basis. In Study 2, self-control behaviors influenced children's free will beliefs, at least in the short-term. In particular, children who failed two self-control tasks had a lower belief in free will compared to children who completed one or both self-control tasks successfully. We did not find a causal influence in the opposite direction, suggesting that improving or depleting self-control in the short term involves more than simply affirming that one believes it is possible. [...]
Our results imply that developing beliefs about internal struggles between desire and “will” are only one of many possible cultural models for action understanding. U.S. children are socialized to connect their emerging understanding of desires – how they operate, how they conflict and how they can be overridden – with the struggles of the will. Thus, they may naturally interpret the experience of self-control as an internal struggle of conflicting desires, and learn to attribute self-control performance to an act of will. Children in Singapore, China, (and perhaps Peru) may be learning to view the same struggle in the same type of self-control task through a different attributional framework. Speculatively, experiences of successful and failed self-control experiences might lead to attributions about norm compliance, without necessarily invoking internal “will” as an intermediary. In China, for example, parents place a strong emphasis on consequences towards others (e.g. family members) and group norm-following as causal explanations for actions (Wang, 2006; Yau & Smetana, 2003). In Singapore, children reference punishment for norm violations as an explanation for why norms must necessarily limit the possibility of acting on desires (Chernyak et al., 2019). Thus, for children in these cultures, beliefs that matter most for self-regulation, and thus the beliefs that are most influenced by evidence from self-control success and/or failure, may be those that govern the extent to which social norms constrain personal autonomy. With regards to Peruvian children, our findings here are necessarily preliminary. More background is needed about socio-cognitive development of young children in Peru and how it connects to the transmission of cultural values. Thus, the variety of causal-explanatory frameworks, and how they emerge in development, and how (or whether) they connect to children's self-control remain open questions for future research.
Appreciate this post! I had seen the good regulator theorem referenced every now and then, but wasn't sure what exactly the relevant claims were, and wouldn't have known how to go through the original proof myself. This is helpful.
(E.g. the result was cited by Frith & Metzinger as part of their argument that, as an agent seeks to avoid being punished by society, this constitutes an attempt to regulate society's behavior; and for the regulation be successful, the agent needs to internalize a model of the society's preferences, which once internalized becomes something like a subagent which then regulates the agent in turn and causes behaviors such as self-punishment. It sounds like the math of the theorem isn't very strongly relevant for that particular argument, though some form of the overall argument still sounds plausible to me regardless.)
Westerners classify things according to how much individual members of a class resemble each other, and East Asians according to the relationships between the classes:
For the Greeks, things belonged in the same category if they were describable by the same attributes. But the philosopher Donald Munro points out that, for the Chinese, shared attributes did not establish shared class membership. Instead, things were classed together because they were thought to influence one another through resonance. For example, in the Chinese system of the Five Processes, the categories spring, east, wood, wind, and green all influenced one another. Change in wind would affect all the others—in “a process like a multiple echo, without physical contact coming between any of them.” Philosopher David Moser also notes that it was similarity between classes, not similarity among individual members of the same class, that was of interest to the ancient Chinese. They were simply not concerned about the relationship between a member of a class (“a horse”) and the class as a whole (“horses”). [...]
Take a look at the three objects pictured in the illustration on page 141. If you were to place two objects together, which would they be? Why do those seem to be the ones that belong together?
If you’re a Westerner, odds are you think the chicken and the cow belong together. Developmental psychologist Liang-hwang Chiu showed triplets like that in the illustration to American and Chinese children. Chiu found that the American children preferred to group objects because they belonged to the “taxonomic” category, that is, the same classification term could be applied to both (“adults,” “tools”). Chinese children preferred to group objects on the basis of relationships. They would be more likely to say the cow and the grass in the illustration go together because “the cow eats the grass.”
Li-jun Ji, Zhiyong Zhang, and I obtained similar results comparing college students from the U.S. with students from mainland China and Taiwan, using words instead of pictures. We presented participants with sets of three words (e.g., panda, monkey, banana) and asked them to indicate which two of the three were most closely related. The American participants showed a marked preference for grouping on the basis of common category membership: Panda and monkey fit into the animal category. The Chinese participants showed a preference for grouping on the basis of thematic relationships (e.g., monkey and banana) and justified their answers in terms of relationships: Monkeys eat bananas.
Live in a different culture for long enough, and you can't help but have it influence your thinking:
Of course, Easterners are constantly being “primed” with interdependence cues and Westerners with independence cues. This raises the possibility that even if their upbringing had not made them inclined in one direction or another, the cues that surround them would make people living in interdependent societies behave in generally interdependent ways and those living in independent societies behave in generally independent ways. In fact this is a common report of people who live in the “other” culture for a while. My favorite example concerns a young Canadian psychologist who lived for several years in Japan. He then applied for jobs at North American universities. His adviser was horrified to discover that his letter began with apologies about his unworthiness for the jobs in question.
Debate is almost as uncommon in modern Asia as in ancient China. In fact, the whole rhetoric of argumentation that is second nature to Westerners is largely absent in Asia. North Americans begin to express opinions and justify them as early as the show-and-tell sessions of nursery school (“This is my robot; he’s fun to play with because …”). In contrast, there is not much argumentation or trafficking in opinions in Asian life. A Japanese friend has told me that the concept of a “lively discussion” does not exist in Japan—because of the risk to group harmony. It is this fact that likely undermined an attempt he once made to have an American-style dinner party in Japan, inviting only Japanese guests who expressed a fondness for the institution—from the martinis through the steak to the apple pie. The effort fell flat for want of opinions and people willing to defend them.
The absence of a tradition of debate has particularly dramatic implications for the conduct of political life. Very recently, South Korea installed its first democratic government. Prior to that, it had been illegal to discuss North Korea. Westerners find this hard to comprehend, inasmuch as South Korea has performed one of the world’s most impressive economic miracles of the past 40 years and North Korea is a failed state in every respect. But, due to the absence of a tradition of debate, Koreans have no faith that correct ideas will win in the marketplace of ideas, and previous governments “protected” their citizens by preventing discussion of Communist ideas and North Korean practices.
The tradition of debate goes hand in hand with a certain style of rhetoric in the law and in science. The rhetoric of scientific papers consists of an overview of the ideas to be considered, a description of the relevant basic theories, a specific hypothesis, a statement of the methods and justification of them, a presentation of the evidence produced by the methods, an argument as to why the evidence supports the hypothesis, a refutation of possible counterarguments, a reference back to the basic theory, and a comment on the larger territory of which the article is a part. For Americans, this rhetoric is constructed bit by bit from nursery school through college. By the time they are graduate students, it is second nature. But for the most part, the rhetoric is new to the Asian student and learning it can be a slow and painful process. It is not uncommon for American science professors to be impressed by their hard-working, highly selected Asian students and then to be disappointed by their first major paper—not because of their incomplete command of English, but because of their lack of mastery of the rhetoric common in the professor’s field. In my experience, it is also not uncommon for professors to fail to recognize that it is the lack of the Western rhetoric style they are objecting to, rather than some deeper lack of comprehension of the enterprise they’re engaged in.
The combative, rhetorical form is also absent from Asian law. In Asia the law does not consist, as it does in the West for the most part, of a contest between opponents. More typically, the disputants take their case to a middleman whose goal is not fairness but animosity reduction—by seeking a Middle Way through the claims of the opponents. There is no attempt to derive a resolution to a legal conflict from a universal principle. On the contrary, Asians are likely to consider justice in the abstract, by-the-book Western sense to be rigid and unfeeling.
Negotiation also has a different character in the high-context societies of the East than in the low-context societies of the West. Political scientist Mushakoji Kinhide characterizes the Western erabi (active, agentic) style as being grounded in the belief that “man can freely manipulate his environment for his own purposes. This view implies a behavioral sequence whereby a person sets his objective, develops a plan designed to reach that objective, and then acts to change the environment in accordance with that plan.” To a person having such a style, there’s not much point in concentrating on relationships. It’s the results that count. Proposals and decisions tend to be of the either/or variety because the Westerner knows what he wants and has a clear idea what it is appropriate to give and to take in order to have an acceptable deal. Negotiations should be short and to the point, so as not to waste time reaching the goal.
The Japanese awase (harmonious, fitting-in) style, “rejects the idea that man can manipulate the environment and assumes instead that he adjusts himself to it.” Negotiations are not thought of as “ballistic,” one-shot efforts never to be revisited, and relationships are presumed to be long-term. Either/or choices are avoided. There is a belief that “short-term wisdom may be long-term folly.” A Japanese negotiator may yield more in negotiations for a first deal than a similarly placed Westerner might, expecting that this will lay the groundwork for future trust and cooperation. Issues are presumed to be complex, subjective, and intertwined, unlike the simplicity, objectivity, and “fragmentability” that the American with the erabi style assumes.
Independent-minded Western culture, versus interdependence-minded East Asian culture:
Training for independence or interdependence starts quite literally in the crib. Whereas it is common for American babies to sleep in a bed separate from their parents, or even in a separate room, this is rare for East Asian babies—and, for that matter, babies pretty much everywhere else. Instead, sleeping in the same bed is far more common. The differences are intensified in waking life. Adoring adults from several generations often surround the Chinese baby (even before the one-child policy began producing “little emperors”). The Japanese baby is almost always with its mother. The close association with mother is a condition that some Japanese apparently would like to continue indefinitely. Investigators at the University of Michigan’s Institute for Social Research recently conducted a study requiring a scale comparing the degree to which adult Japanese and American respondents want to be with their mothers. The task proved very difficult, because the Japanese investigators insisted that a reasonable endpoint on the scale would be “I want to be with my mother almost all the time.” The Americans, of course, insisted that this would be uproariously funny to American respondents and would cause them to cease taking the interview seriously. [...]
An emphasis on relationships encourages a concern with the feelings of others. When American mothers play with their toddlers, they tend to ask questions about objects and supply information about them. But when Japanese mothers play with their toddlers, their questions are more likely to concern feelings. Japanese mothers are particularly likely to use feeling-related words when their children misbehave: “The farmer feels bad if you did not eat everything your mom cooked for you.” “The toy is crying because you threw it.” “The wall says ‘ouch.’” Concentrating attention on objects, as American parents tend to do, helps to prepare children for a world in which they are expected to act independently. Focusing on feelings and social relations, as Asian parents tend to do, helps children to anticipate the reactions of other people with whom they will have to coordinate their behavior.
The consequences of this differential focus on the emotional states of others can be seen in adulthood. There is evidence that Asians are more accurately aware of the feelings and attitudes of others than are Westerners. For example, Jeffrey Sanchez-Burks and his colleagues showed to Koreans and Americans evaluations that employers had made on rating scales. The Koreans were better able to infer from the ratings just what the employers felt about their employees than were the Americans, who tended to simply take the ratings at face value. This focus on others’ emotions extends even to perceptions of the animal world. Taka Masuda and I showed underwater video scenes to Japanese and American students and asked them to report what they saw. The Japanese students reported “seeing” more feelings and motivations on the part of fish than did Americans; for example, “The red fish must be angry because its scales were hurt.” Similarly, Kaiping Peng and Phoebe Ellsworth showed Chinese and American students animated pictures of fish moving in various patterns in relation to one another. For example, a group might appear to chase an individual fish or to scoot away when the individual fish approached. The investigators asked the students what both the individual fish and the groups of fish were feeling. The Chinese readily complied with the requests. The Americans had difficulty with both tasks and were literally baffled when asked to report what the group emotions might be.
The relative degree of sensitivity to others’ emotions is reflected in tacit assumptions about the nature of communication. Westerners teach their children to communicate their ideas clearly and to adopt a “transmitter” orientation, that is, the speaker is responsible for uttering sentences that can be clearly understood by the hearer—and understood, in fact, more or less independently of the context. It’s the speaker’s fault if there is a miscommunication.
Western thought that emphasizes individuals detached of context, versus East Asian thought that emphasizes relationships and contexts:
Most Americans over a certain age well remember their primer, called Dick and Jane. Dick and Jane and their dog, Spot, were quite the active individualists. The first page of an early edition from the 1930s (the primer was widely used until the 1960s) depicts a little boy running across a lawn. The first sentences are “See Dick run. See Dick play. See Dick run and play.” This would seem the most natural sort of basic information to convey about kids—to the Western mentality. But the first page of the Chinese primer of the same era shows a little boy sitting on the shoulders of a bigger boy. “Big brother takes care of little brother. Big brother loves little brother. Little brother loves big brother.” It is not individual action but relationships between people that seem important to convey in a child’s first encounter with the printed word. [...]
“Tell me about yourself” seems a straightforward enough question to ask of someone, but the kind of answer you get very much depends on what society you ask it in. North Americans will tell you about their personality traits (“friendly, hard-working”), role categories (“teacher,” “I work for a company that makes microchips”), and activities (“I go camping a lot”). Americans don’t condition their self-descriptions much on context. The Chinese, Japanese, and Korean self, on the other hand, very much depends on context (“I am serious at work”; “I am fun-loving with my friends”). A study asking Japanese and Americans to describe themselves either in particular contexts or without specifying a particular kind of situation showed that Japanese found it very difficult to describe themselves without specifying a particular kind of situation—at work, at home, with friends, etc. Americans, in contrast, tended to be stumped when the investigator specified a context—“I am what I am.” When describing themselves, Asians make reference to social roles (“I am Joan’s friend”) to a much greater extent than Americans do. Another study found that twice as many Japanese as American self-descriptions referred to other people (“I cook dinner with my sister”).
To rephrase what I took to be the essential gist: if you go from meeting 0 people to meeting 1 person, you are only exposing yourself to (some) additional risk. But if you are regularly seeing 99 people, then by adding on a hundredth person, you are now exposing all of those 99 to potentially getting infected from the hundredth.
Thus going from 99 to 100 is worse than going from 0 to 1, since 0 -> 1 exposes no people other than yourself to added risk, whereas 99 -> 100 exposes 99 other people than yourself to added risk.
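The asymmetry can be made concrete with a toy model (this sketch and its parameters `p` and `t` are my own illustrative assumptions, not figures from the original discussion): suppose each new regular contact is infected with some probability, and an infection you catch passes on to each of your other regular contacts with some probability.

```python
# Toy model of marginal infection risk from adding one more regular contact.
# Assumed parameters (made up for illustration):
#   p = probability that the new contact infects you
#   t = probability that an infection you catch passes on to a given
#       existing contact of yours

def marginal_expected_infections(n_existing: int, p: float = 0.01, t: float = 0.5) -> float:
    """Expected number of additional infections caused by adding one more
    regular contact, given that you already see n_existing people."""
    own_risk = p                      # you can catch it from the newcomer
    onward_risk = p * t * n_existing  # ...and pass it on to everyone you already see
    return own_risk + onward_risk

# Going from 0 -> 1 contacts puts only you at added risk:
print(marginal_expected_infections(0))   # 0.01
# Going from 99 -> 100 also exposes the 99 others:
print(marginal_expected_infections(99))  # 0.505
```

The exact numbers don't matter; the point is that the onward-transmission term grows linearly with the number of people you already see, so the hundredth contact is much costlier than the first.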
That means that the content of your consciousness may be something like:
Time 1: The sight of a bird outside the window. [first-level nen]
Time 2: The thought “there’s a bird over there”. [second-level nen]
Time 3: The experience of typing on a keyboard. [first-level nen]
Time 4: The sound of a car outside. [first-level nen]
Time 5: A mental image of a car. [this could be first- or second-level I think]
Time 6: A sense of being someone who sees the bird and hears the car, while typing on a keyboard. [third-level nen]
… that is, normally you may experience there being a constant, permanent self which feels like what you really are. But in fact, during a large part of your conscious experience, that sense of self may simply not be there at all. Normally this might be impossible to detect due to what’s called the refrigerator light illusion: the light in a refrigerator turns on whenever you open the door, so it seems to you to always be on. Likewise, whenever you ask “do I experience a sense of self right now”, that question references and activates a self-schema, meaning that the answer is always “yes”. It is only by developing introspective awareness that records all mental content, without needing to make reference to a self, that you can come to notice the way in which your self constantly appears and disappears.
Rephrasing that quoted last paragraph a bit, it's saying that much of our consciousness consists of first- or second-level nens, but that we assume there to be a durable self which witnesses all experience (i.e. that our consciousness is all third-level nens); and that the existence of the lower-level nens can be difficult to notice because the mental motion of asking "what am I experiencing right now" intrinsically invokes a third-level nen. (But if the mind is quick enough in doing something like noting practice, it might start to notice that the nen that was returned in response to that query is not actually the nen of which the question was asking, and that the mental objects that the "response nens" are referring to, keep slipping away just a little too soon to be observed...)
I liked this article and upvoted, though I think that it could benefit from more examples, e.g. this paragraph was one where I found myself interested to read some:
Oftentimes, when we acknowledge our shadow values, we find that our strategies for getting them were horrible because we never actually examined them! Sometimes, we find that we don't actually know how to get our needs met, and need to spend some time thinking about them.
I think of the human brain as primarily performing the activity of minimizing prediction errors. That's not literally all it does in that "prediction error" is a weird way to talk about what happens in feedback loops where the "prediction" is some fixed setpoint not readily subject to update based on learning information (e.g. setpoints for things related to survival like eating enough calories). In this model we're maximally content when there is literally no prediction error.
Even assuming that this is true, why does it need to be the most important level of abstraction to consider?
Certainly there are various mechanisms built on top of predictive processing but there seem to be different mechanisms operating on roughly the same level. Even if there weren't, you could go some abstraction levels lower and say that the brain is attempting to e.g. maintain a particular balance of chemicals within the skull (whatever combination of chemicals is necessary to keep it alive), or to just follow the laws of physics. Or you could go some levels higher and say that some complicated set of social motivations is what the brain is primarily doing. Etc.
It doesn't seem obviously wrong to me to say that the brain is primarily performing the activity of minimizing prediction errors, but it also seems not-wrong to me to say that the brain is primarily performing any number of other tasks.
Great post! I was used to hearing "outsource/delegate everything" as the standard wisdom for basically all personal-type things and hadn't really encountered many plausible counterarguments. This post provided quite a few!
I'm not sure in which category you would put it, but as a counterpoint, Team Cohesion and Exclusionary Egalitarianism argues that for some groups, exclusion is at least partially essential and that they are better off for it:
... you find this pattern across nearly all elite American Special Forces type units — (1) an exceedingly difficult bar to get in, followed by (2) incredibly loose, informal, collegial norms with nearly-infinitely less emphasis on hierarchy and bureaucracy compared to all other military units.
To even "try out" for a Special Forces group like Delta Force or the Navy SEAL Teams, you have to be among the most dedicated, most physically fit, and most competent of soldiers.
Then, the selection procedures are incredibly intense — only around 10% of those who attend selection actually make the cut.
This is, of course, exclusionary.
But then, seemingly paradoxically, these organizations run with far less hierarchy, formal authority, and traditional military decorum than the norm. They run... far more egalitarian than other traditional military units. [...]
Going back [...] [If we search out the root causes of "perpetual bickering" within many well-meaning volunteer organizations] we can find a few right away —
* When there are low standards of trust among a team, people tend to advocate more strongly for their own preferences. There's less confidence on an individual level that one's own goals and preferences will be reached if not strongly advocated for.
* Ideas — especially new ideas — are notoriously difficult to evaluate. When there's been no objective standard of performance set and achieved by people who are working on strategy and doctrine, you don't know who has the ability to actually implement their ideas and see them through to conclusion.
* Generally at the idea phase, people are maximally excited and engaged. People are often unable to model themselves to know how they'll perform when the enthusiasm wears off.
* In the absence of previously demonstrated competence, people might want to show they're fit for a leadership role or key role in decisionmaking early, and might want to (perhaps subconsciously) demonstrate prowess at making good arguments, appearing smart and erudite, etc.
And of course, many more issues.
Once again, this is often resolved by hierarchy — X person is in charge. In the absence of everyone agreeing, we'll do what X says to do. Because it's better than the alternative.
But the tradeoffs of hierarchical organizations are well-known, and hierarchical leadership seems like a fit for some domains far more so than others.
On the other end of the spectrum, it's easy when being egalitarian to not actually have decisions get made and fail to have valuable work getting done. For all the flaws of hierarchical leadership, it does tend to resolve the "perpetual bickering" problem.
From both personal experience and a pretty deep immersion into the history of successful organizations, it looks like often an answer is an incredibly high bar to joining followed by largely decentralized, collaborative, egalitarian decisionmaking.
This link seems to give a similar list, though it's divided into frontpage and personal sections rather than aggregating both into one like your list does. (I would probably prefer the combined version for a weekly digest.)
People can tell which strategy you are using, and usually the things that are ‘bad to show’ are bad for you to show, but other people would be perfectly interested to see them. So it is less cooperative, and people may respond to that, which may on a longer term view be bad for you.
Related to this: if you are hiding things, then people may detect that you are hiding them, but not what you are hiding. As a result, they can't tell whether you are hiding something relatively innocuous or something worse, and may intuitively trust you less. If your strategy tends towards revealing things, then they might see the bad things but be overall convinced that you're not hiding anything that would be even worse.
Some people will likely also find your bad sides relatable.
At the same time, sharing too much too quickly can also send a negative signal, as it may suggest a lack of discernment about social norms.
I haven't had this feeling; to me the world might feel less mad now than it used to, but that's probably more of a function of "Kaj coming to understand the internal logic in the actions that previously felt mad" than any real change in the world itself.
I also haven't noticed more people having the world-madness feeling now than before, though I feel like a lot of people have always had that feeling, so I expect that I wouldn't notice a large increase even if one did exist.
My experience with ideas related to this (e.g. Replacing Guilt, IFS) has been that I tend not to be able to muster compassion and understanding for whatever part of myself is putting up resistance. Rather, I just get frustrated with it for being so obviously wrong and irrational.
I think this is one of the situations where it really helps to have someone else facilitate your IFS session. What you describe often happens because you are blended with the part that wants to just "get rid" of the part creating the resistance, and it might be the anti-procrastination part which created your motivation to sit down for an IFS session in the first place. Then you get an arguments are soldiers thing - if you were to actually listen to the procrastinating part, then it might turn out to have some good reason for procrastinating, and the anti-procrastinating part doesn't want to hear that. It doesn't want you to get kicked out of your PhD program, so it certainly doesn't want to consider an argument for something that might get you kicked out!
So then you are trying to unblend from the anti-procrastinating part in order to have empathy for the procrastinating part. But the anti-procrastinating part is also the one which is trying to drive the session forward, and it can't unblend from you while still driving the session! So the need to unblend and the desire to fix the procrastinating part get in conflict, and the process gets stuck.
Effectively, the anti-procrastination part would need to turn itself off, and it doesn't know how to do that. But what you can do, is give control of the session to somebody else, and let them tell you what to do. Once the anti-procrastinating part no longer needs to drive the session, it becomes possible for it to move to the side, and then for you to listen to both parts with empathy.
Suppose that one day, you happen to run into a complete stranger. You don’t think very much about needing to impress them, and as a result, you come off as relaxed and charming.
The next day, you’re going on a date with someone you’re really strongly attracted to. You feel that it’s really really important for you to make a good impression, and because you keep obsessing about this thought, you can’t relax, act normal, and actually make a good impression.
Suppose that you remember all that stuff about cognitive fusion. You might (correctly) think that if you managed to defuse from the thought of this being an important encounter, then all of this would be less stressful and you might actually make a good impression.
But this brings up a particular difficulty: it can be relatively easy to defuse from a thought that you on some level believe is, or at least may be, false. But it’s a lot harder to defuse from a thought which you believe on a deep level to actually be true, but which it’s just counterproductive to think about.
After all, if you really are strongly interested in this person, but might not have an opportunity to meet with them again if you make a bad impression... then it is important for you to make a good impression on them now. Defusing from the thought of this being important, would mean that you believed less in this being important, meaning that you might do something that actually left a bad impression on them!
You can’t defuse from the content of a belief, if your motivation for wanting to defuse from it is the belief itself. In trying to reject the belief that making a good impression is important, and trying to do this with the motive of making a good impression, you just reinforce the belief that this is important. If you want to actually defuse from the belief, your motive for doing so has to come from somewhere else than the belief itself.
IMO, a textbook would either overlook big chunks of the field or look more like an enumeration of approaches than a unified resource.
Textbooks that cover a number of different approaches without taking a position on which one is the best are pretty much the standard in many fields. (I recall struggling with it in some undergraduate psychology courses, as previous schooling didn't prepare me for a textbook that would cover three mutually exclusive theories and present compelling evidence in favor of each. Before moving on and presenting three mutually exclusive theories about some other phenomenon on the very next page.)
I would guess that this sort of reasoning happens a lot. In concrete terms:
A person (call her Alice) forms a heuristic — “I am good at X” — where X isn’t perfectly defined. (“I am good at real-world reasoning”; “I am good at driving”; “I am a good math teacher”.) She forms it because she’s good at X on a particular axis she cares about (“I am good at statistical problem solving”; “I drive safely”; “My algebraic geometry classes consistently get great reviews”).
Here's a mistake which I've sometimes committed and gotten defensive as a result, and which I've seen make other people defensive when they've committed the same mistake.
Take some vaguely defined, multidimensional thing that people could do or not do. In my case it was something like "trying to understand other people".
Now there are different ways in which you can try to understand other people. For me, if someone opened up and told me of their experiences, I would put a lot of effort into really trying to understand their perspective, to try to understand how they thought and why they felt that way.
At the same time, I thought that everyone was so unique that there wasn't much point in trying to understand them by any other way than hearing them explain their experience. So I wouldn't really, for example, try to make guesses about people based on what they seemed to have in common with other people I knew.
Now someone comes and happens to mention that I "don't seem to try to understand other people".
I get upset and defensive because I totally do, this person hasn't understood me at all!
And in one sense, I'm right - it's true that there's a dimension of "trying to understand other people" that I've put a lot of effort into, in which I've probably invested more than other people have.
And in another sense, the other person is right - while I was good at one dimension of "trying to understand other people", I was severely underinvested in others. And I had not really even properly acknowledged that "trying to understand other people" had other important dimensions too, because I was justifiably proud of my investment in one of them.
But from the point of view of someone who had invested in those other dimensions, they could see the aspects in which I was deficient compared to them, or maybe even compared to the median person. (To some extent I thought that my underinvestment in those other dimensions was virtuous, because I was "not making assumptions about people", which I'd been told was good.) And this underinvestment showed in how I acted.
So the mistake is that if there's a vaguely defined, multidimensional skill and you are strongly invested in one of its dimensions, you might not realize that you are deficient in the others. And if someone says that you are not good at it, you might understandably get defensive and upset, because you can only think of the evidence which says you're good at it... while not even realizing the aspects that you're missing out on, which are obvious to the person who is better at them.
Now one could say that the person giving this feedback should be more precise and not make vague, broad statements like "you don't seem to try to understand other people". Rather they should make some more specific statement like "you don't seem to try to make guesses about other people based on how they compare to other people you know".
And sure, this could be better. But communication is hard; and often the other person doesn't know the exact mistake that you are making. They can't see exactly what is happening in your mind: they can only see how you behave. And they see you behaving in a way which, to them, looks like you are not trying to understand other people. (And it's even possible that they are deficient in the dimension that you are good at, so it doesn't even occur to them that "trying to understand other people" could mean anything else than what it means to them.)
So they express it in the way that it looks to them, because before you get into a precise discussion about what exactly each of you means by that term, that's the only way in which they can get their impression across.
I would guess that this sort of reasoning happens a lot. In concrete terms:
A person (call her Alice) forms a heuristic — “I am good at X” — where X isn’t perfectly defined. (“I am good at real-world reasoning”; “I am good at driving”; “I am a good math teacher”.) She forms it because she’s good at X on a particular axis she cares about (“I am good at statistical problem solving”; “I drive safely”; “My algebraic geometry classes consistently get great reviews”).
5. You know what you know, but you don’t know what you don’t know. Suppose each doctor makes errors at the same rate, but about different things. I will often catch other doctors’ errors. But by definition I don’t notice my own errors; if I did, I would stop making them! By “errors” I don’t mean stupid mistakes like writing the wrong date on a prescription, I mean fundamentally misunderstanding how to use a certain treatment or address a certain disease. Every doctor has studied some topics in more or less depth than others. When I’ve studied a topic in depth, it’s obvious to me where the average doctor is doing things slightly sub-optimally out of ignorance. But the topics I haven’t studied in depth, I assume I’m doing everything basically okay. If you go through your life constantly noticing places where other doctors are wrong, it’s easy to think you’re better than them. [...]
7. You do a good job satisfying your own values. [Every doctor] wants to make people healthy and save lives, but there are other values that differ between practitioners. How much do you care about pain control? How much do you worry about addiction and misuse? How hard do you try to avoid polypharmacy? How do you balance patient autonomy with making sure they get the right treatment? How do you balance harms and benefits of a treatment that helps the patient’s annoying symptom today but raises heart attack risk 2% in twenty years? All of these trade off against each other: someone who tries too hard to minimize use of addictive drugs may have a harder time controlling their patients’ pain. Someone who cares a lot about patient autonomy might have a harder time keeping their medication load reasonable. If you make the set of tradeoffs that feel right to you, your patients will do better on the metrics you care about than other doctors’ patients (they’ll do better on the metrics the other doctors care about, but worse on yours). Your patients doing better on the metrics you care about feels a lot like you being a better doctor.
Your metaphor doesn't quite work, because you are trying really hard to show me the color red, only to then argue I'm a fool for thinking there is such a thing as red.
No? I am trying to point you to something in your subjective experience, exactly because it is something that exists in your experience, and which seems like an integral part of how minds are organized. I'm definitely not going to argue that you are a fool for having it, because by default everyone has it.
As in, it might be that no person on Earth has such a naive concept of subjective experience, but they are not used to expressing it in language, then when you try to make them express subjective experience in language and/or explain it to them, they say
Oh, that makes no sense, you're right
Instead of saying:
Oh yeah, I guess I can't define this concept central to everything about being human after 10 seconds of thinking in more than 1 catchphrase.
But my claim is not "there's a concept in your experience that you can't define in words"... I defined it in words in my article! I even explained it in third-person terms, in the sense of "if a computer program made the same mistake, what would be the objectively-verifiable mistake in that."
I am just saying that while the mistake is perfectly easy to define in third-person terms, I cannot give you a definition that would directly link it up to your first-person experience. Because while words can be used to point at the experience, they cannot define the experience in a way that would create it.
We can see where a computer program that committed this mistake would go wrong, but we do not see ourselves from a third-person perspective, so I cannot give you a third-person explanation that would cause the third-person explanation and the first-person experience to link up directly. But I can suggest ways in which you can examine your first-person experience, and then when you have the third-person explanation, the two can link up.
(Note that I am explicitly deviating from the Buddhist writers who say that it's intrinsically impossible to understand what's going on. I get why they are saying that: the Buddhists of old didn't know about computers or simulations, so they didn't have a third-person framework in which the thing can be explained. But we do, and that's why I've explicitly given you the third-person framework, or at least tried to.)
A person who is shown red for the first time could also say "oh, right, that's red; you're right that I couldn't have defined it in words", but unlike your comment suggests, the "I couldn't have defined it in words" isn't the important part of the "oh". The important part is "oh, now I can assign a meaning to your sentence in a way that causes its odd syntax to make sense, and now I can think more clearly about what something like 'seeing red' means".
But again, what I'm saying above is subjective, please go back and consider my statement regarding language, if we disagree there, then there's not much to discuss (or the discussion is rather much longer and moves into other areas), because at the end of the day, I literally cannot know what you're talking about.
If I may ask, how much time did you spend actually following the suggestions in the post and trying to find the thing that I'm pointing at?
It's certainly not "literally impossible". Some are lucky enough to find it the moment they are pointed towards it. Others may have difficulty, and of course, given the fact that human minds vary and some people lack universal experiences, I cannot disprove the possibility that there could be some people who naturally lack this experience altogether.
But I do expect that most people can find it - maybe it takes a minute, maybe ten, maybe a year, I have no idea of what the average and the median here might be. But you have to actually try looking for it.
Well, suppose you had never seen the color red, and I wanted us to have a discussion of what red looks like. You would tell me that in order to know what red looks like, I need to first define it in terms of the concepts you are already familiar with.
This makes sense, but if we had to do that with every concept, it wouldn't work, because then we wouldn't have any concepts to start out from. And if you've never seen anything reddish, I can't give you an explanation that would let you derive red from the concepts you are already familiar with.
So instead I might tell you "see that color that my finger is pointing at? That's red." And then you could look, and hopefully say "oh, okay, I get it now."
I'm trying to do the same thing here. Of course, the problem is, I'm trying to point at an aspect of internal experience, rather than anything in the external world.
But I've done the best I can to give you pointers towards the thing that I expect to be found within your experience if you just know where to look. To extend the color analogy, this is as if I knew there was a line of increasingly reddish objects arrayed somewhere, and I told you to go find the first object and follow along the line and watch them getting increasingly red, and then at the end, you would know what red looks like.
You said that the Kaj/Harris/Kelly/etc. thing is a rather bad philosophy. It is if you evaluate it in terms of a philosophy that is supposed to have a self-contained argument! But that's not its purpose - or at least not the starting point. The purpose is to give you a set of instructions that are hopefully good enough to point out the thing it's talking about, and then when you've looked at your experience and found it, you'll get what the rest is trying to say.
To answer the question of "how to describe internal experience": you could practice describing felt senses in more detail. For example, recently when I found my mind resisting the idea of doing something, I said "when I think of doing this, it feels like there's a part of my mind that says NO, and then I have a sense of there being a brick wall in front of me and it feels like if I try to push through, I'll just end up with a splitting headache". This was literally my experience.
To answer the question of "how to reliably signal internal experience": I'd say you can't. If you are looking for something that will always convince your friends of your experience, then there is no such thing: they could always believe that you were faking, or maybe not even faking but somehow subconsciously deluding yourself. Which you could be!
To believe your report, your friends have to have at least some genuine curiosity for, and openness to, your experience. If your friends don't have that, then - as others have mentioned - it would be better to look for better friends.
To answer the question of "what to do when I think I am doing my best but an outside view suggests that I am being needlessly defeatist": I think that in this case, even if the outside view was right, the best answer would not necessarily be to force yourself forward and work harder.
Well, it depends on the circumstances - maybe you have something left undone that really needs to be done now for you to pay your rent next month, in which case, yeah probably just push yourself.
But in general, this kind of situation means that a part of your mind has information that makes it believe it is an important priority to stop you from doing whatever it is that you feel defeatist about. If you force yourself through, that may work in the short term, but the mind will react to that by noticing that you are doing something that it perceives to be dangerous, and increase the amount of resistance until you become unable to continue pushing through the thing. (If the resistance is mild, this might not be true, especially if pushing through gets you something that feels genuinely rewarding to counterbalance it; but often it is.)
In that case, what you want to do is not to push through, but take the time to find the source of that resistance and investigate why it is that your mind considers this to be a bad idea. If it's mistaken, it can be possible to reconsolidate the emotional learning that's blocking you. Though I suspect that in a lot of cases that lead to burnout, it's actually the other way around: you are doing something because a part of your mind has the mistaken belief that doing this will lead you to something that it is optimizing for, with the rest of the mind throwing up resistance because it knows that belief to be mistaken.
Look at an object in front of you. Spend a moment simply examining its features.
Become aware of the sensation of being someone who is looking at this object. While letting your attention rest on the object, try to notice what this sensation of being someone who is looking at the object feels like. Does it have a location, shape, or feel?
and some of the map discussion following it was inspired by the "mindful glimpses" in The Way of Effortless Mindfulness; this page has three more examples. If some of them seem to work for you (they probably work better if someone reads the steps aloud to you), then the book has several more. (It also discusses some theory, but that theory is mostly not very great/useful; what I found valuable was actually trying the glimpse practices.)
The English language lacks the concept of "being aware from a sensation"; in fact, the English language lacks any concept around "sensation" other than "experiencing it".
Not sure if I'm understanding you correctly, but that sounds like it might be part of the exact issue I'm pointing at? That the concept of a sensation is something that one experiences ("being aware of"), but that in the phenomenological experience something happens to make the experience into one of something that we don't have a word for ("experiencing from") - and which doesn't even really make sense when you think about it.
I acknowledge that "being aware from a sensation" isn't an expression that exists in standard English - exactly because the concept is incongruent. But if I am trying to suggest that the mind tends to create an experience that is logically impossible when you think about it, I'm not sure how to do that without using an expression that doesn't exist in standard English exactly because we don't usually talk about logically impossible things?
When you say that I am making mistakes that usually go unnoticed because they don't usually matter so we don't pay close attention - that's kind of similar to what I'm saying is happening in the mind. That the mind usually makes some mistaken assumptions which generally go unnoticed because one doesn't stop to closely examine them, but once one does start closely examining their own experience, one may start noticing something odd.
Often what happens is that the confusion kind of shifts around - since the mistaken assumption implies a logical impossibility, an attempt to look at it shifts the assumption to a slightly different location, that you are not currently looking at... but you can at least become aware of the fact that your experience was just modified on-the-fly to hide the logical inconsistency, and if you move your attention to where the inconsistency relocated, it moves again.
Alternatively, the system can try to correct the inconsistency by flipping into a state that genuinely doesn't have it, in a way that puts you into something of an altered state of consciousness. But if you stop looking for the inconsistency, it's easy to flip back into one that does have it.
First, and very important: what is "it", the subject of this sentence? Try to define "it" and you'll see that either the problem vanishes or the sentence no longer makes sense.
The 'it' was defined in the immediately preceding sentence so I'm a little confused by this suggestion, but I can define it in the sentence itself too: "The sensation of looking at the world from behind your eyes is a computational representation of your location, rather than being the location itself."
Hmm... thinking about your comment about two different levels, it occurs to me that what you could interpret me to mean by such a sentence would be something like
"The sensation of looking at the world from behind your eyes is a computational representation of your location (appearance in consciousness), rather than being the location (external world) itself"
or some similar mixing of two perspectives. If so, I don't think that I am mixing the levels; rather, this is intended to stay solely on the level of "everything in the external world is an appearance in my consciousness". Here's an attempted rewrite of the sentence to avoid the possibility for ambiguity:
"The sensation of looking at the world from behind your eyes is a computational representation of the location from which you are viewing what you are seeing, but since the sensation is embedded in 'that which you are seeing', your actual perspective - to the extent that 'perspective' is a meaningful term - must be external to the sensation of looking at the world."
Grammarly seems to clear that paragraph as syntactically correct. :)
"The world" is just "inside my brain", but that world includes the physical representation of my body, and that physical representation is still "outside and looking out at the world".
do you mean something like "Imagine that you were building a robot to navigate in a world, and to have an internal 3D representation of its location relative to its surroundings. If something like manipulation of physical objects was important for the robot, then the robot's representation of the world would also include a simulated 3D body that corresponded to its real physical body, and it would be correct to represent that body as looking at the surrounding world"?
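If that is the intended reading, it can be made concrete with a minimal sketch (all names here are hypothetical, chosen just for illustration): the robot's world model contains a representation of its own body, so the "viewpoint looking out at the world" is itself a data structure inside the map, not something standing outside it.

```python
from dataclasses import dataclass, field

@dataclass
class Pose:
    """A modeled 3D position."""
    x: float
    y: float
    z: float

@dataclass
class WorldModel:
    # Objects the robot knows about: name -> modeled position.
    objects: dict = field(default_factory=dict)
    # The robot's own simulated body is just another entry inside the model.
    body: Pose = field(default_factory=lambda: Pose(0.0, 0.0, 0.0))

    def vector_to(self, name: str) -> tuple:
        """Direction from the modeled body to a modeled object -
        i.e. 'looking out at the world' from a point inside the map."""
        obj = self.objects[name]
        return (obj.x - self.body.x, obj.y - self.body.y, obj.z - self.body.z)

model = WorldModel()
model.objects["cup"] = Pose(1.0, 2.0, 0.0)
direction = model.vector_to("cup")
```

The point of the sketch is only that `body` and `objects["cup"]` live at the same level of representation: the "perspective" is computed between two entries of the model, yet it is naturally experienced as the model's origin.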
More broadly, an AI only needs to think that starting a nuclear war has higher expected utility than not starting it.
E.g. if an AI thinks it is about to be destroyed by default, but that starting a nuclear war (which it expects to lose) will distract its enemies and maybe give it the chance to survive and continue pursuing its objectives, then the nuclear war may be the better bet. (I discuss this kind of thing in "Disjunctive Scenarios of Catastrophic AI Risk".)
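The comparison in that scenario is just a one-line expected-utility calculation. The numbers below are made up purely for illustration; the only thing that matters is the inequality.

```python
# Hypothetical numbers: an AI that expects to be destroyed by default
# might still prefer a war it expects to lose, if the chaos slightly
# raises its odds of surviving to pursue its objectives.
p_survive_default = 0.01   # chance of survival if it does nothing
p_survive_war = 0.05       # chance of survival amid a war it expects to lose
value_of_survival = 1.0    # utility of surviving (normalized)

eu_no_war = p_survive_default * value_of_survival
eu_war = p_survive_war * value_of_survival

# The AI starts the war iff EU(war) > EU(no war) -
# "winning" the war never enters the calculation.
starts_war = eu_war > eu_no_war
```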
Most examples of collider bias struck me as unintuitive, and it seems very unlikely that I'm worse than average at causal reasoning.
Is that because they are intrinsically unintuitive, or because they are expressed in an unfamiliar way? I would guess that if one starts by explaining the mono case, then points out how it is analogous to the formal structure (the way Zack did and your quoted Wikipedia example did), then it would be relatively easy for people to get. Whereas if there's an explanation that e.g. starts off from a very mathematical and formal presentation, then it's harder to connect with what you already know intuitively.
then why haven't I ever heard someone answer the complaints of "women like bad boys" by (informally) explaining collider bias?
Is that an example of collider bias? If it were, then one would expect to also hear similar complaints about women's (or for that matter men's) preference for many other traits that are perceived negatively, e.g. "women like guys without money" or "men like unattractive women". The fact that it's "bad boys" that gets singled out in particular suggests that there is actually something special about that trait, and the standard explanations (e.g. that confidence is attractive and that badness correlates with confidence) seem reasonable to me.
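For what it's worth, the statistical effect itself is easy to demonstrate: take two traits that are independent in the full population, then condition on their common effect (being "selected" in some way), and a spurious negative correlation appears. A toy simulation (the threshold and sample size are arbitrary choices):

```python
import random

random.seed(0)

# Two independent standard-normal traits per person.
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# In the full population the traits are (near-)uncorrelated...
full = corr(population)

# ...but among people selected because the traits' sum exceeds a
# threshold (the collider), the traits become negatively correlated.
selected = corr([(x, y) for x, y in population if x + y > 1.5])
```

So within any selected subset, scoring high on one trait predicts scoring lower on the other, even though the traits are unrelated overall; the question in the comment above is whether the dating example actually has this selection structure.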
My objection would have been that these feel like totally arbitrary ways of illustrating "capitalism" and "socialism". Sure, they single out some aspects of the two that have enough of a resemblance to the real-world systems that you can justify the choice, but someone who was sympathetic to socialism could have constructed a very different setup that would have made the kids more sympathetic to it. (E.g. representing capitalism with something where you could invest your candy to get more candy, combined with a random initial division of candy, to create a "the rich get richer and the poor get poorer" effect.)
That said, I like your observation of the bait-and-switch where kids actually got the candy represented by the socialism system! I wonder how conscious that was on the teacher's part.