I liked this for the idea that fear of scarcity can drive "unreasonable" behaviors. This helps me better understand why others may behave in "undesirable" ways and provides a more productive way of addressing the problem than blaming them for being e.g. selfish. This also provides a more enjoyable way of changing my behaviors. Instead of being annoyed with myself for e.g. being too scared to talk to people, I look out for tiny accomplishments (e.g. speaking up when someone got my order wrong) and the benefits they bring (e.g. getting what I wanted to order), to show myself that I am capable. The more capable I feel, the less afraid I am of the world.
This essay had a significant influence on my growth in the past two years. I shifted from perceiving discomfort as something I am subject to, to considering my relationship with discomfort as an object that can be managed. There are many other writings and experiences that contributed to this growth, but this was the first piece I encountered that talked about managing our relationship with hazards as something we can manipulate and improve at. It made me wonder why all human activity may be considered running in the meadow and why contracting may be bad; it showed me how dangers can be mitigated through clearer communication of boundaries; and it made me aware of how people can be hazards too.
After working through Nook Nature, I think I sort of understand now why contracting might be bad. Trying to manage my fears and do things (instead of just trying to avoid mistakes) has indeed led to a more enjoyable experience and makes me feel more alive. However, I still stand by my original comment, in that I'm not quite clear what exactly the author is trying to convey.
Something that strikes me as I reread this piece is that I can't tell which are the assumptions, the claims, and the arguments. For example, the essay says that Meadow Theory claims contraction is bad, as in "it is the claim of this theory and this philosophy that this is bad". Yet there does not seem to be an explanation or argument for why this claim might be true. Does that mean we are supposed to take it as an assumption instead?
I don't know how I would rewrite this essay to make it clearer, but if I were to write a piece to myself that captures part of what I have learnt, it would look something like this:
Meadow Theory, remixed
Life is more rewarding when we have a larger surface area of contact with reality
Expanding our surface area of contact with reality enriches our lives. We can expand into new areas, such as traveling to new places or growing a company, or delve deeper into specific areas, like honing our skills in cooking or mastering a musical instrument. Growth makes life more enjoyable and fulfilling.
But explorations expose us to hazards
Unfortunately, life is filled with hazards, both big and small, and exploring brings us into contact with more of them. For instance, when we travel to a new country, we may face unfamiliar food, language barriers, or cultural misunderstandings. Similarly, as we hone our culinary skills, we may come across complex techniques that carry greater risks, such as flambéing or working with sharp knives.
Hazards hurt us, so we try to eliminate them from our experience
Hazards are unpleasant and can be dangerous, so our instinct is to eliminate them. And if we can’t, we try to eliminate them from our experiences. For example, if we can’t eradicate a disease, then maybe we use antimicrobial soap to wash our hands, or we avoid crowded areas. We think that hazards are the problem to be dealt with, but is this really the case?
Meadow & Posts
Let’s consider an analogy. Imagine you are running freely in a meadow. You're blindfolded, but that's fine, because the meadow is safe. Now, imagine someone informing you that there is a single post somewhere in the meadow. You might get hurt if you run headlong into a post! What do you do? You slow down and feel your way through, just in case the post is right in front of you.
We contract because we are afraid of getting hurt
Suppose the person had been mistaken and there isn’t actually any post in the meadow. Would anything change? No, you still move slowly because you believe there is a post out there. You contract not because there is actual danger, but because you are (sanely) afraid of getting hurt.
Being afraid is unpleasant, so we strive to eliminate posts from our explorations
Our instinctive response is to get rid of posts, or at least get rid of the possibility of encountering posts as we traverse the meadow. We avoid areas known to contain posts, like how people who are afraid of being laughed at might avoid performing on stage. We stick to known routes, like those who choose to remain in their hometowns simply because it feels comfortable, or people who only read books that get good reviews so they won’t waste their time on bad books.
We also help others to avoid encountering posts
When we have a responsibility for or are helping others, we also strive to eliminate posts from their explorations of reality. We ban children from playing outside, because it is dangerous. We tell our employees exactly what to do, so they won't do it wrongly.
However, avoiding posts leads to a more limited experience
Trying to avoid all posts is costly. There are many hazards in the world. Trying to eliminate all hazards from your experience of the world leads to an increasingly narrow life. You wake up in a city you hate, because you're afraid to move to a new place. You stay in a numbing job, because you fear rejection in your job applications. You avoid talking to people, because you’re afraid they might laugh at you. You don't really try to improve your skills, because you're afraid of discovering you’re not so talented after all. In striving to avoid all potential risks, we end up living a limited life.
What if there's a better way?
Imagine if you knew that the meadow contained only one post, and you managed to locate it. You would feel relieved, knowing that it's safe everywhere else, and you could resume running freely.
But as you venture further into the meadow, your certainty about the post's location diminishes. You start to slow down again, because the danger could be anywhere. You contract, not just because you are afraid of danger, but because you're not sure where the danger lies. If the post were on top of a small hill, you would still be able to run freely, slowing down only once you sense the ground sloping upwards. But without such a landmark, you can't tell whether you are nearing a post, so you slow down everywhere.
Managing uncertainty for ourselves
Rather than trying to eliminate all posts, the key is to become better at discerning where hazards are more likely to be, so that we can take the appropriate amount of caution. There are several approaches to managing this uncertainty for ourselves.
One approach is to seek guidance from those who have explored the same area. For example, learning from mentors or seeking advice from experts can provide valuable insights and reduce uncertainty. Maybe we learn from our elders that "pride comes before a fall", so we know to pay attention to whether we are becoming arrogant and careless.
Another approach is to familiarize ourselves with the terrain, so we gain the knowledge and experience that allow us to better predict where posts tend to be. Maybe after cold calling hundreds of strangers, we start to figure out what leads to better results and what leads to rejections.
We can also get better at seeing, by making our blindfolds less opaque. Rationality skills, for example, can help us improve at the general skill of seeing reality as it is, rather than as we perceive it to be.
Yet another approach is to increase our capacity to handle potential hazards. As we grow and develop, our ability to navigate challenges expands. A post that is the size of a grass stalk may be fatal to someone the size of an ant, but a mere irritation to someone as big as a human. For example, the more self-assured we are, the less impact others' opinions have on our self-esteem. Similarly, having more financial resources allows us to take greater financial risks.
Notice that all these approaches encourage you to explore reality, rather than shrink from it. Better yet, these explorations can help you get better at navigating the meadow, so you can explore parts of the meadow that contain larger, more dangerous posts. These approaches enable you to explore more of the world, not less.
Managing uncertainty for others
The principles of managing uncertainty also apply when we are helping others. Rather than trying to completely shield them from all hazards, we can set boundaries and provide guidance to help them navigate their own explorations. For instance, providing the critical guidelines for a junior team member would ensure they do not make catastrophic mistakes, while still allowing them to learn from their own errors. We can teach children to notice how hunger affects their emotions, rather than just telling them what and when to eat. Such an approach promotes growth and resilience while still providing a safety net within certain limits.
Living expansively in a world of hazards
In summary, living expansively in a world of hazards means understanding and managing risks rather than trying to eliminate all possibilities of danger. We don’t need to ensure that there are no hazards, just ensure that we approach hazards appropriately. We want to be more cautious in areas where there is greater danger to us, and to get better at dealing with hazards so we can explore more areas expansively.
What you think of as a failure to fully eliminate all hazards may in fact be a deliberate decision to hold back so as to promote a healthier, more productive approach to dealing with hazards in the world.
I used to deal with disappointment by minimizing it (e.g. it's not that important) or consoling myself (e.g. we'll do better next time). After reading this piece, I think to myself "disappointment is baby grief".
Loss is a part of life, whether that is loss of something concrete/"real" or something that we imagined or hoped for. Disappointment is an opportunity to practice dealing with loss, so that I will be ready for the inevitable major losses in the future. I am sad because I did not get what I'd wanted or hoped for, and that is okay.
Hmm interesting. I agree that there is a difference between a claim about an individual's experience, and a claim about reality. The former is about a perception of reality, whereas the latter is about reality itself. In that case, I see why you would object to the paraphrasing—it changes the original statement into a weaker claim.
I also agree that it is important to be able to make claims about reality, including other people's statements. After all, people's statements are also part of our reality, so we need to be able to discuss and reason about them.
I suppose what I disagree with, then, is that the original statement is valid as a claim about reality. It seems to me that statements are generally/by default claims about our individual perceptions of reality (e.g. "He's very tall."). A claim becomes a statement about reality only when linked (implicitly or explicitly) to something concrete (e.g. "He's in the 90th percentile in height for American adult males." or "He's taller than Daddy." or "He's taller than the typical gymnast I've trained for competitions.").
To say a stated reason is "bizarre" is a value judgment, and therefore cannot be considered a claim about reality. This is because there is no way to measure its truth value. If bizarre means "strange/unusual", then what exactly is "normal/usual"? Is it how Less Wrong posters who upvoted Said's comment would think? How people with more than 1000 karma on Less Wrong would think? There is no meaning behind the word "bizarre" except as an indicator of the writer's perspective (i.e. what the claim is trying to say is "The stated reason is bizarre to Said").
I suppose this also explains why such a statement would seem insulting to people who are more Duncan-like. (I acknowledge that you find the paraphrase as insulting as the original. However, since the purpose of discussion is to find a way so people who are Duncan-like and people who are Said-like can communicate and work together, I believe the key concern should be whether or not someone who is Duncan-like would feel less insulted by the paraphrase. After all, people who are Duncan-like feel insulted by different things than people who are Said-like.)
For people who are Duncan-like, I expect the insult comes about because it presents a subjective (social reality) statement in the form of an objective (reality) statement. Said is making a claim about his own perspective, but he is presenting it as if it is objective truth, which can feel like he is invalidating all other possible perspectives. I would guess that people who are more Said-like are less sensitive, either because they think it is already obvious that Said is just making a claim from his own perspective or because they are less susceptible to influence from other people's claims (e.g. I don't care if the entire world tells me I am wrong, I don't ever waver because I know that I am right.)
Version 3 is very obviously definitely not the same content and I don't know why you bothered including it.
I included Version 3 because after coming up with Version 2, I noticed it was very similar to the earlier sentence ("I definitely no longer understand."), so I thought another valid example would be simply omitting the sentence. It seemed appropriate to me because part of being polite is learning to keep your thoughts to yourself when they do not contribute anything useful to the conversation.
I'm curious, what do you think of these options?
Original: "I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here."
New version 1: "I can't form any coherent model of your thinking here."
New version 2: "I don't understand your stated reason at all."
New version 3: Omit that sentence.
These shift the sentence from a judgment on Duncan's reasoning to a sharing of Said's own experience, which (for me, at least) removes the unnecessary/escalatory part of the insult.
I'm very confused, how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?
The author can say that a reader's post is an inaccurate representation of the author's ideas, but how can the author possibly read the reader's mind and conclude that the reader is doing it on purpose? Isn't that a claim that requires exceptional evidence?
Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that such judgments are more likely to be influenced by factors other than truth. It won't matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, or more attractive get more leeway).
I personally would very much rather people be judged by their concrete actions or the impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author's intent or the majority of readers' understanding), rather than their intent (e.g. saying someone is strawmanning).
To be against both strawmanning (with weak evidence) and 'making unfounded statements about a person's inner state' seems to me like a self-contradictory and inconsistent stance.
Still trying to figure out/articulate the differences between the two frames, because it feels like people are talking past each other. I'm not confident and this is imprecise, but it's what I have so far:
Said-like frame (truth seeking as a primarily individual endeavor)
- Each individual is trying to figure out their own beliefs. Society reaches truer beliefs through each individual reaching truer beliefs.
- Each individual decides how much respect to accord someone (based on the individual's experiences). The status assigned by society (e.g. titles) is just a data point.
- e.g. Just because someone is the teacher doesn't mean they are automatically given more respect. (A student who believes an institution has excellent taste in teachers may respect teachers from that institution more because of that belief, but the student would not respect a teacher just because they have the title of "teacher".)
- If a student believes a teacher is incompetent and is making a pointless request (e.g. assigned a homework exercise that does not accomplish the learning objectives), the student questions the teacher.
- A teacher who responds in anger without engaging with the student's concerns is considered to be behaving poorly in this culture. A teacher who is genuinely competent and has valid reasons should be able either to explain them to the student or to otherwise manage the student, or should have enough certainty in their competence that they will not be upset by a mere student.
- Claims/arguments/questions/criticisms are suggestions. If they are valid, people will respond accordingly. If they are not, people are free to disagree or ignore them.
- If someone makes a criticism and is upset when no one responds, the critic is in the wrong, because no one is obliged to listen or engage.
- The ideal post is well-written, well-argued, and more true than individuals' current beliefs. Through reading the post, the reader updates towards truer beliefs.
- If a beginner writes posts that are of poorer quality, the way to help them is by pointing out problems with their post (e.g. lack of examples), so that next time, they can pre-empt similar criticisms, producing better quality work. Someone more skilled at critique would be able to give feedback that is closer to the writer's perspective, e.g. steelman to point out flaws, acknowledge context (interpretive labor).
- The greatest respect a writer can give to readers is to present a polished, well-written piece, so readers can update accordingly, ideally with ways for people to verify the claims for themselves (e.g. source code they can test).
- The ideal comment identifies problems, flaws, or weaknesses, or provides supporting evidence, alternative perspectives, or information relevant to the post, helping each individual reader better gauge the truth value of the post.
- If a commenter writes feedback or asks questions that are irrelevant or not valuable, people are free to ignore or downvote it.
- The greatest respect a commenter can give to writers is to identify major flaws in the argument. To criticize is a sign of respect, because it means the commenter believes that the writer can do better and is keen to make their post a stronger piece.
Duncan-like frame (truth seeking as a primarily collectivist endeavor)
- Each society is trying to figure out its collective beliefs. Society reaches truer beliefs through each individual helping other individuals converge towards truer beliefs.
- The amount of respect accorded to someone is significantly informed by society. The status assigned by society (e.g. titles) acts as a default amount of respect to give someone. For example, you are more likely to believe a doctor's claim that "X is healthier than Y" than a random person's claim that Y is healthier, even if you do not necessarily understand the doctor's reasoning, because society has recognized the doctor as medically knowledgeable through the medical degree.
- e.g. A student gives a teacher more respect in the classroom by default, and only lowers the respect when the teacher is shown to be incompetent. If a student does not understand the purpose of a homework exercise, the student assumes that they are lacking information and will continue assuming so until proven otherwise.
- If a student questions the teacher's homework exercise, the teacher would be justified in being angry or punishing the student, because they are being disrespected. (If students are allowed to question everything the teacher does, it would be far less efficient to get things done, making things worse for the group.)
- Claims/arguments/questions/criticisms are requests to engage. Ignoring comments would be considered rude, unless they are obviously in bad faith (e.g. trolling).
- The ideal post presents a truer view of reality, or highlights a different perspective or potential avenue of exploration for the group. Through reading the post, the reader updates towards truer beliefs, or gets new ideas to try so that the group is more likely to identify truer beliefs.
- If a beginner writes posts that are of poorer quality, the way to help them is to steelman and help them shape it into something useful for the group to work on. Someone more skilled at giving feedback is better at picking out useful ideas and presenting them with clarity and concision.
- The greatest respect a writer can give to readers is to present a piece that is grounded in their own perspectives and experiences (so the group gets a more complete picture of reality) with clear context (e.g. epistemic status, so people know how to respond to it) and multiple ways for others to build on the work (e.g. providing source code so others can try it out and make modifications).
- The ideal comment builds on the post, such as by providing supporting evidence, alternative perspectives, relevant information (contributing knowledge) or by identifying problems, flaws, weaknesses and providing suggestions on how to resolve those (improving/building on the work).
- If a commenter writes feedback or asks questions that are irrelevant or not valuable, the writer (or readers) respond to it in good faith, because the group believes in helping each other converge to the truth (e.g. by helping others clear up their misunderstandings).
- The greatest respect a commenter can give to writers is to identify valuable ideas from the post and build on them.
It feels like an argument between a couple where person A says "You don't love me, you never tell me 'I love you' when I say it to you." and person B responds "What do you mean I don't love you? I make you breakfast every morning even though I hate waking up early!". If both parties insist that their love language is the only valid way of showing love, there is no way for this conflict to be addressed.
Maybe person B believes actions speak louder than words and that saying "I love you" is pointless because people can say it even when they don't mean it. And perhaps person B believes that this is the ideal way for the world to work, where everyone is judged purely based on their actions and 'meaningless' words are omitted, because it removes a layer of obfuscation. But the thing is, the words are meaningless only to person B; they are not meaningless to person A. It doesn't matter whether or not the words should be meaningful to person A. Person A as they are right now has a need to hear that verbal affirmation; person A genuinely has a different experience when they hear those words; it's just the way person A (and many people) are wired.
If you want to have that relationship, both sides are going to have to make adjustments to learn to speak the other person's language. For example, both parties may agree to tapping 3 times as a way of saying "I love you" if Person B is uncomfortable with verbal declarations.
If both parties think the other party is obliged to adjust to their frame, then it would make sense to disengage; there is no way of resolving that conflict.
I actually think I prefer Said's frame on the whole, even though my native frame is closer to Duncan's. However, I think Said's commenting behavior is counter-productive to long-term shifting of community norms towards Said's frame.
I am not familiar with the history, but from what I've read, Said seems to raise good points (though not necessarily expressed in productive ways). It's just that the subsequent discussion often devolves into something that's exhausting to read (I wish people would steelman Said's point and respond to that instead of just responding directly; I wish people would stop responding to Said if they felt the discussion was getting nowhere, rather than end up in long escalating conflicts; and I don't have a clear idea of how much Said is actually contributing to the dynamics in such conversations, because I get very distracted by the maybe-justified-maybe-not uncharitable assumptions being thrown around by all the participants).
I think there are small adjustments Said can make to the phrasing of his comments that would make a non-trivial difference, with positive effects even for people who are not as sensitive as Duncan.
For example, instead of saying "I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here", Said could have said "I don't understand your stated reason at all". This shifts from a judgment on Duncan's reasoning to a sharing of Said's own experience, which (for me, at least) removes the unnecessary insult[1]. I suspect other people's judgments have limited impact on Said's self-perception, so this phrasing won't sound meaningfully different to Said, but I think it does make a difference to other people, whether or not it is ideal that this is how they experience the world. And maybe it's important that people learn to care less about other people's judgments, but I don't think it's fair to demand that they change instantly and become like Said, or to say that people who are unable to do that, or refuse to, simply should not be allowed to participate at all (or like saying sure you can participate, as long as you are willing to stick your hand in boiling water even though you don't have gloves and I do).
Being willing to make adjustments to one's behavior for the sake of the other party would be a show of good faith, and builds trust. At least in my native frame/culture, direct criticism is a form of rudeness/harm in neutral/low-trust relationships and a show of respect in high-trust relationships, and so building this trust would allow the relationship to shift closer to Said's preferred frame.
Of course, this only works if Duncan is similarly willing to accommodate Said's frame.
I agree that there is something problematic with Said's commenting style/behavior given that multiple people have had similar complaints, and given that it seems to have led to consequences that are negative even within Said's frame. And it is hard to articulate the problem, which makes things challenging. However, it feels like in pushing against Said's behaviors, Duncan is also invalidating Said's frame as a valid approach for the community discourse. This feels unfair to people like Said, especially when it seems like a potentially more productive norm (when better executed, or in certain contexts). That's why it feels unfair to me that Said is unable to comment on the Basics of Rationalist Discourse post.
It's a bit like there's a group of people who always play a certain board game by its rules, while there's another group where everyone cheats and the whole point is to find clever ways to cheat. To people from the first group, cheating is immoral and an act of bad faith, but to the other group, it's just a part of the game and everyone knows that. One day, someone from the first group gets fed up with people from the second group, and so they decide to declare a set of rules for all game players, that says cheating is wrong. And then they add that the only people who get to vote are people who don't cheat. Of course the results aren't going to be representative! And why does the first group have the authority to decide the rules for the entire community?
I don't know for certain if this is the right characterization, but here are a few examples of why I think it is more an issue of differing frames than something with a clear right/wrong: (I am not saying the people were right to comment as they did, just pointing out that the conflict is not just about a norm; there is a deeper issue of frames)
- In a comment thread, Said says something like Duncan banned Said likely because he doesn't like being criticized, even though Duncan explicitly said otherwise. To Duncan, this is a wrongful accusation of lying, (I think) because Duncan believes Said is saying that Duncan-in-particular is wrong about his own motivations. However, I think Said believes that everyone is incapable of knowing their true motivations, and therefore, his claim that Duncan might be motivated by subconscious reasons is just a general claim that has no bearing on Duncan as a person, i.e. it's not intended as a personal attack. It's only a personal attack if you share the same frame as Duncan.
- When clone of saturn said "However, I suspect that Duncan won't like this idea, because he wants to maintain a motte-and-bailey where his posts are half-baked when someone criticizes them but fully-baked when it's time to apportion status.", I read it to mean that "I suspect" applies to the entire sentence, not just the first half. This is because I started out with the assumption that it is impossible for anyone to truly know a person's motivations, and therefore the only logical reading is that "I suspect" also applies to "he wants to maintain a motte-and-bailey". There's no objective true meaning to the sentence (though one may agree on the most common interpretation). It's like how for some people, saying "I don't like it" implies "and I want you to stop doing it", while for others it just means "I don't like it" plus "that's just my opinion, you do you". Thus, I personally would consider it a tad extreme (though understandable given Duncan's experiences) to call for moderator response immediately without first clarifying with clone of saturn what was meant by the sentence.
While I do think Said is contributing to the problem (whether intentionally or unintentionally), it would be inappropriate to dismiss Said's frame just because Said is such a bad example of it. This does not mean I believe Said and Duncan are obliged to adjust to each other's norms. Choosing to disengage and stay within their respective corners is, in my opinion, a perfectly valid and acceptable solution.
I didn't really want to speak up about the conflicts between Duncan and other members, because I don't have the full picture. However, this argument is spilling out into public space, so it feels important to address the issue.
As someone who joined about a year ago, I have had very positive experiences on LW so far. I have commented on quite a few of Duncan's posts and my experience has always been positive, in part because I trust that Duncan will respond fairly to what I say. Reading Duncan's recent comments, however, made me wonder if I was wrong about that.
Because I am less sensitive than Duncan, it often felt like Duncan was making disproportionately hostile and uncharitable responses. I couldn't really see what distinguished comments that triggered such extreme responses from other comments. That made me worry that if I'd made a genuine mistake understanding Duncan's point, Duncan would also accuse me of strawmanning, of not trying hard enough, or of being deliberately obtuse. After all, I have misunderstood other people's words before. Seeing Duncan's explanations in subsequent comments helped me get a better understanding of Duncan's perspective, but I don't think it is reasonable to expect people to read through various threads to get the context behind Duncan's replies.
This means that from an outsider's perspective, the natural takeaway is that we should not post questions, feedback or criticisms, because we might be personally attacked (accused of bad intentions) for what seems like no reason. It is all the more intimidating/impactful given that Duncan is such an established writer. I know it can be unfair to Duncan (or writers in general) because of the asymmetries, but things continuing as they are would make it harder to nurture healthy conflict at LW, which I believe is also counter to what Duncan hopes for the community.
To end off more concretely, here are some of the things I think would be good for LW:
- To consider it pro-social (and reward?) when participants actively choose to slow down, step back, or stop when engaged in unproductive, escalating conflicts (e.g. Stopping Out Loud)
- For it to be acceptable to post half-baked ideas and request gentler criticism, and for such requests to be respected, e.g. for critics to make their point clear and step back if it is clear that their feedback is unwanted, so readers can judge for themselves
- It should be made obvious via the UI if certain people have been blocked; otherwise it gives a skewed perspective.
- When commenting on posts by authors who prefer more collaborative approaches, or on posts for half-baked ideas:
- commenters to provide more context behind comments (e.g. why you're asking about a particular point: is it because you feel it is a critical gap, or are you just curious), because online communication is more error-prone than in-person interaction, and also so it is easier for both parties to reach a shared understanding of the discussion
- If readers agree with a comment, but the comment doesn't meet the author's preferred requirements, to help refine the comment instead of just upvoting it (might need author to indicate if this is the case though, because sometimes it's not obvious?).
- To be willing to adjust commenting styles or tolerance levels based on who you are interacting with, especially if it is someone you have had significant history with (else just disengage with people you don't get along with)
- If one feels a comment is being unfair, to express that sentiment rather than going for a reciprocal tit-for-tat response so the other has an opportunity to clarify. If choosing to respond in poor form as a tit-for-tat strategy (which I really don't like), to at least make that intent explicit and provide the reasoning.
- To avoid declaring malicious intent without strong evidence, or to disengage/ignore the comment when unable to do so. e.g. "You are not trying hard enough to understand me/you are deliberately misunderstanding me." --> "That is not what I meant. <explanation/request for someone to help explain/choose to disengage>."
- For authors to have the ability to establish the norms they prefer within their spaces, but to be required to respect the wider community norms if it involves the community.
- Common knowledge of the different cultures as well as the associated implications.
[1] Insult here referring to the emotional impact sense that I'm not sure how to make more explicit, not Said's definition of insult.
Right now it feels like it's an either/or choice between criticism and construction, which puts them in direct opposition, but I don't think they're necessarily in conflict with each other.
After all, criticism that acknowledges the constraints and nuances of the context is more meaningful than criticism that is shallow and superficial, and criticism that highlights a new perspective or suggests a better alternative is more useful than criticism that only points out flaws. In a sense, it's not that there's too much criticism and not enough contribution; it's that we want critiques that meet higher standards.
Maybe instead of trying to figure out how to determine the right amount of criticism individuals are exposed to, we can instead focus on building a culture that values and teaches the writing of good critique? There would still (and always) be simplistic or nitpicky criticisms, but perhaps if the community were better at identifying them as such, and at providing feedback on how to make such comments better, things would improve over time.
Admittedly, I don't really know what this would look like in practice, or whether or not it would make a difference to the experience of authors, but putting the issue in terms of killing Socrates feels like dooming it to win/lose or lose/lose solutions...
Noted, and I appreciate the response.
Copy-paste doesn't seem to work in general, I had to retype the markdown formatting for my comment.
...I don't think the issue here is nuance. My attempt at a non-nuanced non-unfriendly version would be more like "It feels like CYA because those nuances are obvious to you, but they aren't actually obvious to some other people." or maybe "It feels like CYA because you are not the target audience."
As someone who is perhaps overly optimistic about people's intentions in general, I don't really like it when people make assumptions about character/values (e.g. don't care about truth) or read intent into other people's actions (e.g. you're trying to CYA, or you're not really trying to understand me). People seem to assume negative intent with unjustifiable levels of confidence when there can be better alternative explanations (see below), and this can be very damaging to relationships and counterproductive for discussions. I think it might be helpful if we move away from inferring unknowable things and focus more on explaining our own experiences instead? (e.g. I liked the part where DirectedEvolution shared about their experience rewriting the section, and also Duncan's explanation that writing nuance feels genuinely effortless).
Example of an alternative interpretation:
...basically, if you find the distinction tedious, it's strong evidence that you're either blind to the meaningfulness in the first place, or you just don't care.
There is a third possibility I can think of: something may be meaningful and important but omitted because it is not relevant to our current task. For example, when we teach children science, we don't teach them quantum mechanics simply because it is distracting when learning the basics, and not because quantum mechanics is irrelevant or unimportant in general. I personally would prefer it if teachers made this more explicit (i.e. say that they are teaching a simplified model and we would get to learn more details next time) but I get the impression that this is already obvious to other people so I'd imagine it comes across as superfluous to them.
I appreciate this essay because I have experienced a (much milder) version of this "not existing". It helps me feel seen in certain ways. I also like that it helps me understand a different kind of perspective, and that it helps me make sense of Duncan's behavior in some of the comment threads. However, I must admit that while I understand intellectually that this is how Duncan experiences things, I myself can't really imagine it; I don't understand it on the gut level. The below response is influenced by this essay and also recent discussions on other posts.
The spectrum
There seems to be a spectrum in terms of how much weight people give their own experiences compared to things other people say.
On the one end, we have people who believe so weakly in their own experiences that if someone asks them "Why didn't you lock the door?", the first instinct is to doubt themselves and ask "Oh no did I forget?", even if they know that they had locked the door and even checked it multiple times. (If they hear someone say people like them don't exist, they conclude "Maybe I don't actually exist?")
On the other end, we have people who so firmly believe in their own experiences that even if multiple people tell them something that contradicts their own experience, they will simply laugh it off as ridiculous. (If they hear someone say people like them don't exist, they think "Of course I exist, therefore they must be wrong.")
People don't necessarily belong exclusively to one group. One may be very opinionated about taste in music, while at the same time sensitive about their food preferences.
The need for both sides
Both are important:
We need to be able to listen to alternate explanations of our own experiences and to be able to accept that other people can have experiences that are different from ours, because our personal experiences are just a very tiny part of all of human experience. We want to be able to learn from and cater to all the different perspectives, not just our own limited perspective.
Yet, firm belief in our own experiences is useful for ensuring that we don't end up with societal beliefs that are divorced from reality. If everyone is too willing to believe others' words over their own perceptions, if there was no child ready to point out that the emperor is in fact not wearing any clothes, then it seems society would end up with nonsensical beliefs touted by charlatans, beliefs that no one has actually ever personally experienced.
We want, of course, to strike a balance. We want to be able to trust our own experiences: other people's opinions should not be able to negate our own experiences. And yet, we must also be open to the possibility that we are wrong. We should be able to hold both things in our minds at once and weigh them carefully rather than defaulting to one or the other.
It's really hard though (for me, at least). In some areas I default far too easily to believing others over myself. And yet in other areas, I find myself slow to update when I hear about experiences that don't match mine. There's no vocabulary or concept in my world to describe their experiences, so it gets rounded off to something that matches my world without me realizing it. I can't tell that I'm doing it until someone explains it to me in a way I can understand. (And sometimes I get the sense that people on the "own perspective" end are simply incapable of realizing that people can have experiences that are different from theirs.)
Working with people from different parts of the spectrum
Some discussions I have with people feel more collaborative. We both believe the other has something useful to say. If I am struggling to express my thoughts, they help by rephrasing or suggesting possibilities based on their understanding, until we converge upon a common understanding. They may disagree with me, but they make what feels like a genuine attempt to understand what I am trying to say and see if there is some truth to it. It feels like we are working together to figure out the truth. I think people like that are on the "believing what others say" end of the spectrum.
Some discussions feel less equal. It feels like they are so sure and confident in their own perspective that it is my job as the person with the different perspective to convince them that they are wrong. I have to explain things from their perspective; they won't help me understand or clarify my thoughts. It feels like the burden of shifting both of us towards what is true is almost entirely on me. I am the salesperson, trying to sell them my version of reality. They are the customer, waiting to be convinced. I think it's part of what Frame Control was talking about?
Talking to people like that can be exhausting, and quickly becomes very frustrating when I am under high stress. Maybe this is because I have such weak belief in my own opinions. In some areas, it feels like there's some kind of mental/emotional cost incurred when I try to express something that is different from what people commonly believe, because a part of myself believes I am wrong and so I have to expend energy to go against both my beliefs and other people's beliefs.
There are certain conditions, however, where the second type of discussion feels collaborative. Like in the second case, they don't help you express your thoughts, they don't reflect back at you, they poke holes in your argument. And yet, moments later, days later, weeks later, you realize that they are thinking about what you said and that they do actually take into consideration the things you said when they make decisions. If there is equality on a higher level (i.e. there are also conversations where they try to change my mind, and in those cases they are the ones putting in the work of convincing me) and they show that they are listening and willing to change their minds, then it also feels like collaborative discussion, just of a different style. The tricky thing is that I don't think you can tell the difference between the second and third cases immediately, only through repeated interactions.
In other words, sometimes it feels like I'm talking to someone from the "own perspective" end of the spectrum but it turns out that they're more central, it's just that they are conversing in a different style.
A better culture
I'm not sure how true this concept of the spectrum is, but if it were true, is there anything that would help? Here are some ideas based on what I've found helpful:
Help people build trust in their own experiences:
- Stop outright invalidating people's experiences, e.g. "It hurts." "No, it doesn't."; "I hate my baby brother." "No, you don't."
- Take care (sometimes? when you have significant influence over someone?) to distinguish between one's opinions and reality, e.g. "The drink is too sweet for me" instead of "The drink is too sweet", or "I think that X, but I may be wrong." (I think it's important for people to learn that "The drink is too sweet" is just someone's opinion, rather than end up thinking that it is other people's responsibility to phrase it as an opinion.)
- Teach people to pay attention to their own experiences (needs to be balanced by the concept that we can have blind spots, may interpret things wrongly, etc.), e.g. instead of saying "It is wrong for anyone to touch you in these places", we say "This is your body. If anyone touches you in any way that makes you feel uncomfortable, you can say no" etc.
- Take care to listen to people when they aren't being heard, rather than just dismissing their concerns (e.g. "I'm sure he didn't mean it"), especially when they are people you are close to or you have power over them
Help people see that people's opinions are a reflection of their models rather than an indication of the truth or validity of other people's experiences:
- Take pride in harmless differences, especially if they are things that society typically frowns upon (needs to be countered with respect for society's comfort level with things that are 'weird'): e.g. "People think it's childish, but I like reading children's books!", or "I know people think I have bad taste, but I really like X."
- Celebrate differences in opinions: e.g. one person says "The first one was the best", the other person says "Really? I think the last was the nicest, because X", then the first person responds with "Ah, interesting! I think ..." (or anything genuinely positive)
Create space for individual variations:
- Acknowledge and accommodate differences, e.g. "You can observe first then join in later if you'd prefer."
- Accommodate (on a societal level) the different types of needs (e.g. wheelchair accessibility) and talk about it, so people know about it (e.g. kitchenware for the blind)
- Make it safe for people to express differences e.g. if someone states a different opinion and your response is a "No way! Really?!" and they seem to withdraw or react unexpectedly, make it clear that you are interested in their opinion (e.g. ask a question to learn more) (needs to be genuine)
Help people understand that everyone is different, in ways that we consistently underestimate:
- Share about your experiences (like this essay), or write/share articles/essays about people who experience the world differently (e.g. news articles about difficulties faced by people who have learning disorders) or about typical mind fallacy
- Things like personality quizzes are perhaps harmful in some ways but they do seem quite successful at encouraging people to talk about their differences?
- Encourage people when they share their opinions and participate in conversations (even if badly), because the best way of learning that everyone is different is by having discussions with people who are different. (note that the needs of all parties should still be considered)
I like this for the idea of distinguishing between what is real (how we behave) vs what is perceived (other people's judgment of how we are behaving). It helped me see that rather than focusing on making other people happy or seeking their approval, I should instead focus on what I believe I should do (e.g. what kinds of behaviour create value in the world) and measure myself accordingly. My beliefs may be wrong, but feedback from reality is far more objective and consistent than things like social approval, so it's a much saner goal. And more importantly, it is a goal that encourages genuine change.
"Oh, that," said the king with a shrug. "That isn't your honor, Costis. That's the public perception of your honor. It has nothing to do with anything important, except perhaps for manipulating fools who mistake honor for its bright, shiny trappings. You can always change the perceptions of fools."
-- The King of Attolia, by Megan Whalen Turner
What we want is for perceptions to match with what is real, not for perceptions themselves to be manipulated independently of reality.
I like this because it reminds me:
- before complaining about someone not making the obvious choice, first ask if that option actually exists (e.g. are they capable of doing it?)
- before complaining about a bad decision, to ask if the better alternatives actually exist (people aren't choosing a bad option because they think it's better than a good option; they're choosing it because all other options are worse)
However, since I use it for my own thinking, I think of it more as an imaginary/mirage option instead of a fabricated option. It is indeed an option fabricated by my mind, but it doesn't feel like I made it up. It always feels real, then turns out to be an illusion upon closer examination.
I agree in some sense that for the purpose of my learning/interest, I would rather people err on the side of engaging with less effort than not engaging at all. However, I think community norms need to be more opinionated/shaped, because they influence the direction of growth.
The culture I've enjoyed the most is one where high standards are considered desirable by the community as a whole, especially by core members, but it is acceptable if members do not commit to living up to those standards (you gain respect for working like a professional, but it is acceptable to just dabble like an amateur):
- You are only penalised for failing to fulfill your responsibilities/not meeting the basic standards (e.g. being consistently late, not doing your work) and not for e.g. failing to put in extra effort. You have the freedom to be a hobbyist, but you are still expected to respect other people's time and work.
- Good norms are modelled and highlighted so new members can learn them over time
- You need to work at the higher standards to be among the successful/respected within the group (the community values high quality work)
- People who want to work at the higher standards have the space to do so (e.g. they work on a specific project where people who join are expected to work at higher standards or only people who are more serious are selected)
I like it because it feels like you are encouraged or supported or nudged to aim higher, but at the same time, the culture welcomes new people who may just be looking to explore (and may end up becoming core members!). It was for a smaller group that met in person, where new people are the minority, and the skill is perhaps more legible, so I'm not sure how that translates to the online world.
It's also fun being in groups that enforce higher standards, but the purpose of those groups tends to be producing good work rather than reaching out to people and growing the community.
This was interesting! Here's my attempt to make sense of the essay & the comments:
TL;DR We can think of the parts of reality that we have influence over as our surface area of contact with reality. One way of expanding this is increasing our scale of impact (e.g. self -> friends & family -> communities -> world). Since reality is fractal, though, you can also expand by engaging more deeply with reality and developing expertise in an area (e.g. beginner -> able to cook good food for yourself). Increasing scale of impact tends to seem more impressive, but delving deeper also expands our agency over reality, just in a less visible manner. This fractal nature of reality also means that regardless of which scale you choose to work at, you will still be able to live a rich and rewarding life.
It's remarkable to me how we are all living in the same physical reality, yet some people seem to be living in much bigger worlds than others. Some work for a salary, organize gatherings with friends, sew or knit, or read autobiographies of famous people. Others start companies, set up non-profit organizations, make software used by hundreds of thousands, or collaborate with famous people. Their worlds feel much bigger: things that are merely painted items on the backdrop of my stage are props they can interact with on theirs.
An easy way to measure this difference is scale of impact. People can generally control their own actions (most of the time) and maybe cajole a loved one to do as they wish. Some can persuade their friends to, say, try out a new place for a meal or sign a petition. Fewer can manage departments, fewer still can lead a multi-national company, and yet fewer still can lead a country. Similarly, anyone can write, but only some can publish novels read by millions. People who have a larger scale of impact are more impressive, because they have influence over a larger part of reality. This generally comes along with developing expertise: the more you learn, the more your reality, i.e. the parts of the world you have influence or agency over, expands.
The thing is, though, that reality is fractal. The surface of your bubble is not smooth - it consists of many small bubbles, and the surfaces of those bubbles consist of yet smaller bubbles. Thus, there is another way to increase your area of influence. Rather than increasing the size of your bubble or moving to a larger bubble, you can instead delve into the tiny bubbles along the surface.
One can think of developing expertise as learning to make increasingly precise adjustments to the effects you have on the world. When learning music, you start off trying to play the notes you see on your music sheet. Later on you try to play the notes with the dynamics you're imagining in your mind. Later still, you work on using dynamics to convey the emotion you want the listener to feel. As you gain mastery, you explore the smaller bubbles, learning finer ways of influencing the world. Just like moving up the scale of impact, this gives you agency over a larger part of the reality - your action space increases. Only when you've spent time exploring the nuances of music can you play music that moves people. Only when you've spent time exploring the internals of a computer do you have the option of repairing your own laptop. Only when you've spent time shopping and comparing prices for your groceries will you know how to find the best deals.
When you engage with reality, your reality expands. You are rewarded with greater agency regardless of whether you move up the scale (zoom out) or explore the details (zoom in). However, zooming out often sounds more impressive than zooming in, because it is more visible and requires less expertise to detect.
It's easy to compare the number of people reporting to a manager. Also, people who have larger spheres of influence are more likely to be known (you're more likely to read about a CEO than a junior employee). In contrast, not everyone can evaluate how skilled a teacher is at teaching. Furthermore, a competent teacher is not necessarily going to be more well-known than a teacher who is less so. Or to put it another way, a division manager just sounds more impressive than a kindergarten teacher, even if the teacher is better at managing people.
There is a tradeoff between zooming out and zooming in, because you have a finite amount of time. Also, knowing the details can become unimportant (and possibly a waste of time) when you zoom out far enough. It's important for a CTO to have technical expertise, but it would be unnecessary for the CTO to be familiar with all the nuances of a programming language.
This is a choice you can make, and maybe some would decide to scale up as much as possible. Reality is fractal though, which means that you can live a rich and rewarding life regardless of which scale you choose to work at. There are always lots of choices. For example, in cooking, you can explore making Mexican dishes, or creative presentation of food, or fusion of unusual flavors, or even finding as many ways as possible to use tofu. (Or you could move up the scale by becoming a chef and opening restaurants, sharing your grandma's recipes on your website, posting YouTube videos so people can learn to make tasty vegetarian dishes, or building software that recommends recipes based on the ingredients you have on hand.) Reality is a very rich place to explore.
It's been an absolute delight using excalidraw, thanks for the rec! Everything just works and it looks pretty:)
For perfectionism, I think never being satisfied with where you're at now doesn't mean you can't take pride in how far you've come?
"Don't feel complacent" feels different from "striving for perfection" to me. The former feels more like making sure your standards don't drop too much (maintaining a good lower bound), whereas the latter feels more like pushing the upper limit. When I think about complacency, I think about being careful and making sure that I am not e.g. taking the easy way out because of laziness. When I think about perfectionism (in the 12 virtues sense), I think about imagining ways things can be better and finding ways to get closer to that ideal.
I don't really understand the 'argument' virtue so no comment for that.
No problem:) Hope it helps & all the best!
I find Feldenkrais generally useful for releasing tension. There are exercises which are targeted at jaw/facial tension like this (tried this once, worked for me), but I find that exercises which release tension in my hips tend to also release tension in my jaws, so looking at exercises for hips may work as well. I've enjoyed working through the exercises in this channel.
Hmm didn't really find anything similar, but here are some examples of rating systems I found that looked interesting (though not necessarily relevant):
2-factor rating systems
SaidIt: (1) Insightful & (2) Fun
SaidIt is a Reddit alternative which seeks to "create an environment that encourages thought-provoking discussion". SaidIt has two types of upvotes to choose from: 1) insightful, and 2) fun.[1]
Goodfilms: (1) quality & (2) rewatchability
Goodfilms is a movie site where users rate, review, and share films, and find movies to watch. Users rate movies on two dimensions: quality and rewatchability. The ratings are displayed as a scatterplot, giving users a better sense of the type of movie (e.g. most people agree it is highly rewatchable, but there is disagreement on its quality => may not be very good, but is fun to watch).[2]
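To make the scatterplot idea concrete, here's a rough Python sketch of how the two axes can be read (the 0-5 scale and all the numbers are invented for illustration; this is not Goodfilms' actual implementation):

```python
import statistics

# Invented two-axis ratings for one film: (quality, rewatchability) on 0-5.
ratings = [(2.0, 4.5), (4.5, 4.8), (1.5, 4.2), (3.0, 4.6)]

quality = [q for q, _ in ratings]
rewatch = [r for _, r in ratings]

# Wide spread on quality plus tight agreement on rewatchability is the
# "may not be very good, but is fun to watch" pattern described above.
print(statistics.mean(quality), statistics.stdev(quality))  # 2.75, ~1.32
print(statistics.mean(rewatch), statistics.stdev(rewatch))  # 4.525, ~0.25
```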
Suggestion by Majestic121: (1) Agree/Disagree & (2) Productive/Unproductive
A Hacker News comment by Majestic121 suggests a 2-factor voting system:
Up/Down: Agree/Disagree
Left/Right: Makes the discussion go backward/forward
This way you could express disagreement while acknowledging that the point is interesting, or like a joke without having it drown a conversation
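If it helps, here's a minimal sketch (in Python, with invented names) of how such a two-axis vote might be stored and tallied; the point is just that the two axes are summed independently:

```python
from dataclasses import dataclass

# Hypothetical data model for the suggested two-axis vote.
@dataclass(frozen=True)
class Vote:
    agree: int    # +1 agree, -1 disagree, 0 abstain
    forward: int  # +1 moves the discussion forward, -1 backward

def tally(votes):
    """Summarize each axis independently."""
    return {
        "agreement": sum(v.agree for v in votes),
        "direction": sum(v.forward for v in votes),
    }

votes = [Vote(-1, +1), Vote(-1, +1), Vote(+1, -1)]
print(tally(votes))  # {'agreement': -1, 'direction': 1}
# Negative agreement with positive direction is exactly the
# "I disagree, but this is productive" signal described above.
```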
Suggestion by captainmuon: (1) Promote/Bury & (2) Reward
Hacker News comment by captainmuon: Promote/Bury and Reward buttons
Up/downvotes always have multiple conflicting dimensions.
- The post is factually right / wrong
- Confirms to the site rules / breaks the rules
- I agree / disagree
- I want to promote / bury this post
- Reward poster with XP / punish
I am usually very pragmatic and upvote a post when I want other people to read it (because I want to see the discussion, or because I want to spread the idea). I also upvote to reward the poster.
I don't tend to downvote factually wrong posts when they are still interesting, because that limits the chance they get good discussion, and because I don't want to punish somebody for Being Wrong On The Internet. I do downvote positions that I find bad in order to reduce their reach.
It would probably be possible to have a site that implements two dimensions as a cross (maybe only for users with a certain XP), although the UX might not be too great. Maybe it is a good idea to have "promote/bury" and "reward" buttons?
Others
Pol.is: displays a cluster graph of participants based on their voting patterns on statements
Pol.is is a platform where participants submit statements for others to vote on (agree/disagree/pass), and participants are then clustered based on their votes. People can see this map of voters and are incentivized to craft statements that also appeal to members of other groups to gain more support, thus converging on a consensus.[3]
Description of graph extracted from Pol.is report:
In this graph, statements are positioned more closely to statements which were voted on similarly. Participants, in turn, are positioned more closely to statements on which they agreed, and further from statements on which they disagreed. This means participants who voted similarly are closer together.
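Pol.is's actual pipeline has more to it, but the core move (project the participant-by-statement vote matrix down to 2D, then cluster) can be sketched in a few lines of Python; the toy votes here are invented:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy vote matrix: rows are participants, columns are statements.
# +1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [+1, +1, -1, -1],
    [+1, +1, -1,  0],
    [-1, -1, +1, +1],
    [ 0, -1, +1, +1],
])

# Project participants into 2D so that people who voted similarly land
# close together, then cluster them into opinion groups.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10).fit_predict(coords)
print(groups)  # e.g. [0 0 1 1]: two opinion groups emerge
```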
Example case study: vTaiwan Uber
People in Taiwan were invited to discuss the regulation of Uber. At the beginning, there were two groups: pro-Uber and anti-Uber. As people tried to submit statements that would gain more supporters, they converged on a set of seven comments that the majority agreed on, such as "It should be permissible for a for-hire driver to join multiple fleets and platforms." These suggestions shaped the regulations that were eventually adopted by the government.[3]
Other Pol.is case studies and reports: https://compdemocracy.org/Case-studies
Tweakers: users can assign scores from -1 to +3, with detailed guidelines on how to vote
Tweakers is a Dutch technology website. Users can assign a score of +3 (insightful), +2 (informative), +1 (on-topic), 0 (off-topic/irrelevant), or -1 (unwanted) to comments. The median score is displayed, but users can click to view the breakdown of votes.[4] A score of 0 with no votes is displayed in a different color from a score of 0 with one or more votes.
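A quick sketch of that display logic in Python (the vote numbers are made up):

```python
from collections import Counter
from statistics import median

# Tweakers-style scores: each vote is one of -1, 0, +1, +2, +3.
votes = [+3, +2, +2, 0, -1]

shown = median(votes)          # the single score displayed: 2
breakdown = Counter(votes)     # the per-score counts shown on click
print(shown, dict(breakdown))  # 2 {3: 1, 2: 2, 0: 1, -1: 1}

# An unvoted comment also displays 0, hence the different color for
# "0 with no votes" vs "0 with one or more votes".
has_votes = len(votes) > 0
```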
There are detailed guidelines on what each score means, as well as how to handle common scenarios, such as:
- Comments about spelling, typos etc.: -1, because they should be reported in the Dear Editorial Forum instead
- Inaccurate comments: no lower than 0 if well-intentioned; -1 is reserved for socially inappropriate behavior
- Opinions you disagree with: comments should not receive lower ratings simply because they conflict with your opinion. To express dissatisfaction, reply to the comment with your refutation.
- Strategic voting e.g. upvoting a comment because you think its score is too low, rather than because you think it deserves the high score: do not do this
The guidelines also vary based on the context, with the guidelines explaining how moderation practices should differ for downloads, product reviews, and other pages.
(Disclaimer: site was in Dutch so I used Google Translate)
Slashdot: voting by assigning different types of labels (e.g. insightful, redundant)
Moderators can assign different labels to comments, each of which adds or deducts a point from the comment's score (see the sketch after the list below). There are descriptions of what each label means in the FAQ. The labels are as follows:
- Normal (default setting)
- Offtopic
- Flamebait
- Troll
- Redundant
- Insightful
- Interesting
- Informative
- Funny
- Overrated
- Underrated
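As promised above, a minimal sketch of this label-based scoring in Python. I'm assuming scores are clamped to the -1 to +5 range Slashdot is known for, and that a comment starts at 1; treat both as assumptions rather than a faithful reimplementation:

```python
POSITIVE = {"Insightful", "Interesting", "Informative", "Funny", "Underrated"}
NEGATIVE = {"Offtopic", "Flamebait", "Troll", "Redundant", "Overrated"}

def moderate(score: int, label: str) -> int:
    """Apply one moderation label; "Normal" leaves the score unchanged."""
    if label in POSITIVE:
        score += 1
    elif label in NEGATIVE:
        score -= 1
    return max(-1, min(5, score))  # assumed -1..+5 clamp

score = 1  # assumed starting score
for label in ["Insightful", "Funny", "Redundant"]:
    score = moderate(score, label)
print(score)  # 2
```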
Placebo button :p
Hacker News comment by kevin_thibedeau:
They should have dummy agree/disagree buttons that disappear once selected and have no function other than to satisfy the malcontents that the machine has recorded their opinion.
In a two-factor voting system, what happens if I'm not sure if I agree or disagree, e.g. because I am still thinking about it?
If agree means "I endorse the claims or reasoning and think that more people should believe them to be true", I would probably default to no (I would endorse only if I'm pretty sure about something, and not endorsing doesn't mean I think it's wrong), so it's more like +1/0 voting. But if agree means "I think this is true", disagree would then mean saying "I think this is false", i.e. more like +1/-1 voting, so I would probably abstain?
Claim 2: Agree/disagree buttons are confusing or even harmful for comments that are making multiple claims. This is significant enough that there should not be an agree/disagree button for comments where agree/disagree buttons are not suitable.
- Agree: The negative consequences are significant enough that there should not be agree/disagree buttons for certain types of comments. For example, authors may be able to decide if they will allow agree/disagree votes on their comment.
- Disagree: It is acceptable to have agree/disagree votes even for posts/comments where this does not make sense, e.g. because people will adjust accordingly. We can add in a feature to disable agree/disagree votes for certain comments, but it is also okay if we don't.
Claim 1C: See claim 1A.
- Agree: I may or may not think that I/other users have this experience, but I think the effects are negative and significant enough, or have the potential to be significant enough that we should see if there are ways to address this when designing a new voting system.
- Disagree: I may or may not think that I/other users have this experience, but I think that the effects are not negative or are negligible enough that we do not need to factor this into the design of a new voting system.
Claim 1B: See claim 1A.
- Agree: This may or may not match my experience, but I believe that the majority (>50%) of users on LW are less likely to write replies expressing agreement/disagreement because they can now vote agree/disagree.
- Disagree: This may or may not match my experience, but I believe that the majority (>50%) of users on LW would still write a reply even if they can just vote agree/disagree.
Claim 1A: Agree/disagree buttons disincentivize productive conversations because clicking the disagree button satisfies the need for expressing disagreement (or agreement) at lower cost (less effort & no reputational cost, since votes are anonymous) than writing out a reply. This is a significant enough concern that we should consider its effects when deciding whether or not to go with the new voting system.
- Agree: This matches my experience: I am less likely to write replies expressing agreement/disagreement because I am now able to vote agree/disagree.
- Disagree: This does not match my experience: If I was already going to write a reply, I would still write one even if I can just vote agree/disagree.
This comment is an experiment. I'm trying out a variant of the proposed idea of voting by headings/block quotes: this comment contains my comment, and the replies below contain claims extracted from my comment for agree/disagree voting.
Agree/disagree buttons incentivize knee-jerk, low-effort reactions rather than deliberate, high-effort responses
Something I like about LW's system of upvotes meaning "things you want to see more of" and having no agree/disagree button is that there's no simple way of expressing agreement or disagreement. This means that when there's something I disagree with, I'm more incentivized to write a comment to express it. That forces me to think more deeply because I need to be able to state clearly what it is I'm agreeing or disagreeing with, especially since it can be quite nuanced. It also feels fairer because if someone went to the effort of writing a comment, then surely it's only fair that I do likewise when disagreeing. (Unless of course it was a low effort comment, in which case I could always just downvote.)
I suspect that if there's an agree/disagree button, the emotional part of me would be satisfied with clicking the disagree button, whereas currently, it pushes me to express my disagreement as a (thought-through, reasoned) reply. I aspire to be someone who responds thoughtfully, but that is not an instinctive behavior. With the disagree button available, I would be fighting instinct rather than working with it. It encourages emotional, knee-jerk reactions rather than deliberate responses.
(It's nice to be able to get a rough gauge of the community's opinion of a statement though. It's not of much practical use in terms of evaluating the truth of a statement, because I prefer to weight different people's opinions differently based on the topic, but it does give a general sense of the community's opinion.)
Agree/disagree buttons are confusing or even harmful for comments that are making multiple claims
There are Telegram channels where news agencies post a message for each article. Each message contains a headline and a link to the article, and people then react to the message using emojis. It's quite amusing when there are headlines like "Person X convicted of Y and sentenced to Z" and you see many thumbs up and thumbs down. It makes me wonder, are you showing approval/disapproval of the crime, the conviction, or the punishment? It also seems to contribute to typical mind fallacy/confirmation bias problems.
Similarly, having agree/disagree buttons on comments that have multiple claims doesn't really make sense, because we can't tell which part is being agreed/disagreed with and people might end up interpreting the votes according to their own beliefs.
Suggested alternative: agree/disagree buttons for claims created specifically for voting
Others have suggested allowing voting per heading or by block quote but I think that the way I phrase my comments is different from how I would craft a claim. Also, some statements aren't meant for people to evaluate (e.g. sharing of personal stories, giving encouragement, sharing related posts).
Thus, one possibility I can think of is to let users create claims specifically for agree/disagree voting. Other users (besides the author) can also add in separate claims extracted from the comment. Hopefully, when claims are designed to be agreed/disagreed with, it makes the agree/disagree votes easier to interpret. (Ideally, there should probably be a third option that says "this is not a statement that can be meaningfully agreed/disagreed with" or "this is not a well-crafted statement".)
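To make the proposal concrete, here's a hypothetical data model in Python (all names invented), including the third "not well-posed" option suggested above:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A claim created specifically for voting, separate from the comment."""
    text: str
    author: str
    votes: dict = field(
        default_factory=lambda: {"agree": 0, "disagree": 0, "not_well_posed": 0}
    )

    def vote(self, choice: str) -> None:
        if choice not in self.votes:
            raise ValueError(f"unknown vote type: {choice}")
        self.votes[choice] += 1

c = Claim("Agree/disagree buttons disincentivize productive replies.", "me")
c.vote("agree")
c.vote("not_well_posed")
print(c.votes)  # {'agree': 1, 'disagree': 0, 'not_well_posed': 1}
```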
Thoughts after trying it out
- This is really, really hard. I'm not sure if my claims are well-formulated, and I'm not sure which claims are meaningful to extract from my comment. When there are many ways to disagree with a statement, I can't tell which ways are more meaningful (i.e. where to draw the line for agreeing/disagreeing, or which ones are worth creating separate claims for). It's also very high effort to write compared with a typical comment.
- I notice that a part of me seems to prefer uncontroversial claims to get that validation, while another part of me wants more controversial claims so that it'll be more fun (i.e. won't know what answer to expect).
- Would it make sense to have a separate section where we can view all the claim comments for a post? (probably needs to support some form of sorting by relevance) Would that be a way to help the community reason something out collectively?
- I wonder if it has a tendency to focus attention on a narrower set of ideas, simply because those are the options offered.
MSRayne is saying "no, not in my experience," but afaict MSRayne has also self-identified as being in the set of [people whose personal boundaries already lie outside of the social boundary, such that even things which do not violate the social boundary are already violating their personal boundary].
Yes I'd read about this in the other comment but I think it didn't really register until I saw MSRayne's reply above.
The reply was enough for something to click in my head, possibly because it was a more concrete explanation, but your explanation made the misunderstanding more explicit to me, so thanks!
Oh I think I see what you mean. If there's always a cost to saying no, then all boundary violations are basically threats and hence aggressive.
And I think you always lose something if you say no to someone - always. It is always coercive. It just may not be visible on the surface - but they will resent you a little bit for it, and the more you do it the more resentment will build up.
I recognize this, or at least something like it - it's like when people ask for your opinions. People say that there is no wrong answer and that you should say what you really think, but I always felt that that wasn't true. There are wrong answers, and you will know that they are wrong because people will respond negatively to them (e.g. they like you less afterwards because your opinion differed from theirs). People don't really want to hear what you have to say; they just want validation.
To avoid saying the wrong thing, I ended up trying to figure out what people were hoping to hear (e.g. based on how they phrase their questions), so that I could tell them what they wanted me to say. I didn't even notice that habit until one day when someone asked me a question and I couldn't tell what they wanted - they were completely blank to me. I ended up giving an answer truer to myself, and was expecting a negative response. Yet they didn't show disapproval, and more surprisingly, neither did they show approval. They really just did want my answer!
The experience showed me that something I thought was a trait of all humans was actually more like an attribute that varies based on the individual. Some people just want validation, but others genuinely want to hear what you have to say. That changes the game, because it means it's not actually my job to say what people want to hear, it's just how some people prefer to be dealt with. I can always keep my true thoughts aside for people who want to hear them.
Some time after, I shared my opinion with someone who responded dismissively. Yet days later, they asked me a question that showed that they were thinking about what I'd said. I learned that just because someone responds negatively, it doesn't necessarily mean they are upset with me and want me to be different; sometimes it's just a natural response to hearing something you don't like or even just something new. What's interesting is that had I continued saying what I thought others would want to hear, I wouldn't have realised that people are ok with listening to what I have to say.
There are things I tend to avoid because they weren't good experiences in the past and when I think of doing them now, it just feels like a bad idea. Sometimes when I'm with the right people or in the right context though, my mind realizes that there is a very low likelihood of something terrible happening, it's just my heart that's convinced that something awful will happen. But when my heart wants something badly enough, the risk becomes worth it and so I try it even though it feels scary. So far, it's paid off every time. Sure, sometimes it doesn't go the way I hope for, but then again nothing terrible happened either.
I think the difference is that where I used to pay attention to just my negative experiences, I now also pay attention to when there isn't a negative response, both for myself and when watching others interact. I notice that the ratio is different from what I'd always thought it was (1:0), because the people I'm with are different, because people change, and because I pay attention to a broader slice of reality. That's why it feels safer to try (with the right people). (There's also that I'm more capable now, and can therefore cope better with anything that might happen.)
I think it's quite interesting how sometimes you can't tell if your beliefs are wrong unless you are willing to do things that past experiences say you shouldn't, and create opportunities to prove your beliefs false. It's like confirmation bias, except I'd never thought to apply it to personal/emotional experiences.
I don't know, can't know what your experiences are like - I couldn't even understand Caperu_Wesperizzon's and your comments. I want to say though, that I think people who are nice and good with boundaries do exist, and I hope that you get to meet them someday.
If you fail to respond adequately you decrease the respect of your comrades (because you can't take it like a man or whatever) and thus by proxy decrease intimacy.
Hmm, if you lose respect for responding wrongly, then it doesn't really seem like a benign boundary violation anymore? The way I see it, a boundary violation can be considered benign only if you are capable of saying no, and the other person is genuinely capable of accepting and respecting a no. Otherwise, it's more like coercion. (And the violation shouldn't have very negative consequences for the person, based on what can be anticipated.)
If your friend takes your things without asking and you tell them to stop doing it because you don't like it, and they apologise and stop doing it, then that was a benign boundary violation. If they stop but then go around telling others that you are selfish, or they stop and then complain about how they always have to give in to your demands, or they ignore you and tell you that best friends share everything, then that's not benign at all. You can't really tell from the boundary violating action though, only from their response when you say no.
People who are more powerful (e.g. physically stronger, higher social status) are more capable of saying no because the consequences of saying no are less severe for them. In that sense, things that seem like benign boundary violations are more likely to be benign for them, so they tend to see it as benign (and may not realise that this is not the case for others). I don't think it's benign just for the masculine though, because it works the other way around as well. If the person who is violating the boundaries is responsible about it (e.g. sensitive to potential power imbalances), it can also work. Also, boundary violations don't have to be aggressive (?). Here are some examples that are milder/more feminine that I think also count as benign boundary violations (if done properly):
- affectionate nicknames (For a female version of the faggot example, I had a schoolmate who called people "bitch" only if she considered them a friend, e.g. greeting them with "Hey bitch!")
- playing with/braiding someone's hair without asking
- adjusting someone's collar when you see the tag sticking out
- giving someone very sour candy without telling them that beforehand
- untying someone's shoelaces (making sure they notice it before standing up so they don't accidentally trip)
- asking "Can I borrow your pen pretty pretty please? Just 5 seconds! Thanks!" and taking it before you hear them say yes
- asking sensitive questions, like someone's salary or a woman's age
- playful emotional manipulation like making puppy eyes at someone to persuade them to share their snack with you (only works if the other person is capable of saying no if they genuinely don't want to do it, and you are capable of truly accepting the rejection, and both parties understand that it's play)
We have dangerous knowledge like nuclear weapons or bioweapons, yet we are still surviving. It seems like people with the right knowledge and resources are disinclined to be destructive. Or maybe there are mechanisms that ensure such people don't succeed. What makes AI different? Won't the people with the knowledge and resources to build AGI also be more cautious when doing the work, because they are more aware of the dangers of powerful technology?
In AI software, we have to define an output type, e.g. a chatbot can generate text but not videos. Doesn't this limit the danger of AIs? For example, if we build a classifier that estimates the probability of a given X-ray being abnormal, we know it can only provide numbers for doctors to take into consideration; it still doesn't have the authority to decide the patient's treatment. This means we can continue working on such software safely?
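To spell out what I mean by "define an output type" (the names here are made up for illustration, not any real system's API):

```python
# The function signature fixes the output type: whatever the model does
# internally, callers only ever receive a number, never an action.
def abnormality_probability(xray_pixels: list[list[float]]) -> float:
    """Return P(abnormal) in [0, 1]. Model internals omitted."""
    ...

# The treatment decision stays with the doctor:
p = 0.87  # example model output
if p > 0.8:
    print("Flag for radiologist review")  # a suggestion, not an order
```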
it was explained to me why my concerns were wrong
Not sure if what I have in mind is the same, but I can think of scenarios where an explanation of how I'm wrong makes it feel like my concerns are being dismissed instead of being addressed. I'm guessing it's because a child's reasoning can seem illogical to an adult even though they actually make sense from the child's perspective, and it's upsetting when adults fail to acknowledge this.
Notice that jefftk is responding to the child from the child's perspective. The child thinks that there's not enough pasta, presumably because of what they can see from the serving bowl. jefftk shows the child the extra pasta in the kitchen (so the child can see that there's actually more pasta), thus addressing the child's concerns.
In contrast, one may answer from the adult's perspective instead. For example, they may say that there's enough because one serving of pasta is x grams and they made 10 servings when we have only 8 people. Or maybe they say that it's made by grandma who has lots of experience in estimating how much everyone needs. These make sense from the adult's perspective, but if the child doesn't really understand or trust the reasoning (e.g. because they don't have the concepts yet), then such explanations would feel more like dismissals of the child's concerns.
Not really a response, just something I thought of while reading this comment:
The obvious solution to people having different and unclear boundaries is to make those boundaries clearer, such as by asking for explicit consent, or by having a No-Prank List, as mentioned in johnswentworth's comment. Stating boundaries too clearly may lead to misuse, but I suppose it does also make bad actors more obvious, because they can no longer hide behind the excuse of ignorance.
Nonetheless, even if we do somehow manage to convey most of our boundaries (e.g. via AR glasses), it would be highly unlikely that we'd be able to communicate all our boundaries all the time. Boundaries are sensitive to context and may change from moment to moment. We may not even realise where our boundaries lie until someone violates them. It would be impractical to find ways to make our boundaries clear enough that accidental boundary violations no longer happen. Worse still, if we managed to clearly communicate the simpler boundaries (where the consequences of violation are often milder) but not the more complex boundaries (where consequences tend to be more severe), how would we get to practice negotiating ambiguous boundaries? There won't be any simple cases to safely experiment and learn from!
Thus, the more practical solution would be to improve people's abilities to negotiate ambiguous boundaries, such as the skills mentioned in Linda Linsefors' comment, or learning how to say no. Or, say, learning to pay attention to your personal boundaries instead of just social boundaries (e.g. if someone touches me in a way that makes me feel uncomfortable, I move away instead of staying still just because there's no social rule saying it's wrong). Another useful skill would be finding ways to limit the consequences of having your boundaries violated (or finding ways to meet your needs without violating other people's boundaries). For example, informing your hosts beforehand that you are allergic to peanuts, or bringing earplugs to noisy places if you're sensitive to sounds.
I'd thought that how the No-Prank List and "welcomes hugs" stickers worked was by making boundaries clearer so people know what they're allowed to do and what they cannot do, but now it seems like their value lies more in how they limit the downsides of being wrong. Because you now know who doesn't want to be pranked, or who doesn't want unsolicited feedback, you can safely take action without fearing unacceptably negative consequences. Maybe someone likes being pranked in some ways but not others, and I use a prank they don't really enjoy. However, since they did not add their names to the list, it suggests that they think they will be okay with most pranks (even if they may not like it). The list doesn't ensure I never violate other people's boundaries; it makes it safer for me to explore.
I find the terminology confusing, because asking for more "benign boundary violations" sounds like wanting strangers to do things that breach social boundaries that are not personal boundaries, yet the examples refer to friends and partners, not strangers. It doesn't make sense to say these are examples of "benign boundary violations" for close relationships though. Boundaries for friends are different from boundaries for strangers, so such behavior wouldn't be considered boundary violations.
I think of it differently: within any relationship, there is a space that you are generally allowed to explore without first asking for explicit consent. ("Allowed to explore" meaning that mistakes are tolerated.) You still need to negotiate your boundaries within this space, but it's done via informed guesses, non-verbal cues or slow escalation, rather than directly asking someone for their answer.
When someone tries an interaction (e.g. ruffling your hair), there are two levels to look at:
- Is it ok that they explored that interaction space, e.g. are you ok with them trying friendly physical touch?
- Are you ok with the action e.g. are you ok with having your hair ruffled?
Being too explicit when asking for someone's consent implies that you don't consider the action to lie within the permitted exploration space for the relationship, and therefore that you think your relationship is more distant (like how you would preface a personal question with "Can I ask you a personal question?" for a stranger but not a friend). Daring to try something that violates social norms (e.g. ruffling someone's hair) implies that you think you are in a close enough relationship to justify the attempt, even if it turns out that the other person doesn't like it. If it is indeed a close enough relationship, the other person can always accept the attempt while rejecting the specific action.
I think a typical way of handling individuals who have needs that are violated by social norms would be carving out spaces for people with different needs, like having quiet carriages on trains, or providing vegetarian options on a menu. We can also be more accepting towards people who try to carve out their own spaces. For example, if someone needs alone time to recharge and thus chooses to sit separately from the group, the group accepts this rather than complaining about anti-social behavior.
A similar example: when you don't understand what someone is saying, it can be helpful to say "I don't understand. Do you mean X or Y?" instead of just saying "I don't understand". This way, even if X and Y are completely wrong, they now have a better sense of where you are and can thus adjust their explanations accordingly.
Just some thoughts I had while reading:
rule out everything you didn't mean
This reminds me of something I've heard -- that a data visualization is badly designed if different people end up with different interpretations of what the data visualization is saying. Similarly, we want to minimise the possible misinterpretations of what we write or say.
Each time I add another layer of detail to the description, I am narrowing the range of things-I-might-possibly-mean, taking huge swaths of options off the table.
Nice point, I've never really thought about it this way, yet it sounds so obvious in hindsight!
Choosing to include specific details (e.g. I like to eat red apples) constrains the possible interpretations along the key dimensions (e.g. color/type), but leaves room for different interpretations along presumably less important dimensions (e.g. size, variety).
I have a tendency to be very wordy partly because I try to be precise about what I say (i.e. try to make the space enclosed by the moat as small as possible). Others are much more efficient at communicating. I'm thinking it's because they are much better at identifying which features are more relevant, and are happy to leave things vague if they're less critical.
They don’t laugh nervously, don’t give tiny signals that they are malleable and interested in conforming to your opinion or worldview.
This sounds not-quite-right, as pointed out by others, but I feel like I kind of recognize it. It's natural for people to adapt to others or be influenced by them: shifting their accents, adjusting to others' preferred communication styles, or taking an interest in something because a friend is enthusiastic about it. It can be odd to meet people who don't do that. And if someone you interact with regularly shows zero inclination to be influenced or affected by you (for example, when you speak they just pretend to acknowledge it and then proceed to ignore everything you said, brushing off your concerns instead of e.g. trying to understand what you're saying or modifying their explanations based on your questions), or adapts only when they feel like it, then it can feel like you're not really a person, just an object or plaything in their world? It's not necessarily malicious though.
My current takeaway from this is to recognize that adjusting/adapting to others is a thing that people may or may not do, so we can notice when someone isn't reciprocating and decide on a response, rather than thinking the problem lies with us and continuing to make things worse.
A more general version would be that it is a choice to follow "social rules" like being polite, listening to your coach, or reciprocating when someone helps you. If the other party isn't acting in good faith (e.g. rude salesman, abusive coach, con man), then you can choose not to follow the rules (and deal with the consequences, whatever they may be).
Potentially related examples:
Group identity statements (pressure not to disagree)
- A team believes themselves to be the best at what they do and that their training methods etc. are all the best/correct approach. If you suggest a new training method that seems to be yielding good results for other teams, they wouldn't treat it seriously because it's a threat to their identity. However, if the team also takes pride in their ability to continuously refine their training methods, they would be happy to discuss the new method.
- If a group considers themselves to be "anti-pineapple" people, then saying "I like pineapples on my pizza" would signal that you're not really part of the group. Or maybe they think X is harmful and everyone knows pineapples contain X, then proudly declaring "I like pineapples on pizza" would mark you as an outsider.
Self-fulfilling prophecies (coordination + pressure not to disagree publicly)
- It's the first week of school and the different student clubs and societies have set up booths to invite students to join. The president of club X tells you that they are the second largest club in the school. This makes club X seem like an established group and is one of the reasons you register your interest and eventually decide to join the group. Later on, you find out that club X actually had very few members initially. The president was basing his claim on the number of people who had registered their interest, not the actual members. However, since he managed to project the image of club X as a large and established group, many people join and it indeed becomes one of the largest student groups.
- A captain tells the team before a game that they are going to win. The team is motivated and gives their best, therefore winning the game. (Some people may know the statement is false, which they may reveal to others in private conversations. They won't state it in public because they know that the statement is intended to coordinate the team (i.e. it's not meant to be literally true) and that they are more likely to succeed if everyone believes it to be true. It is important only for those who think this is a factual statement and are likely to give up if it's false to believe the statement is true. People who think this is just a pep talk would assume that the captain will say the same thing regardless of what's true.)
- The Designated Driver Campaign successfully introduced the practice of having a designated driver when out for drinks by portraying it as a norm in entertainment shows. I'm not sure which it was: whether people adopted the idea because they thought it was a good one even though they knew it was artificial, or because it seemed like everyone was doing it. Either way, a social norm that existed only on TV became an actual social norm because enough people decided to go along with it.
Group norms (coordination + disagreement is rejected)
- We use 2 whitespaces instead of 4 whitespaces for indentation as a convention.
- We act as one team. If we have come to a decision as a team, everyone follows the decision whole-heartedly even if they disagree. If you don't want to, please quit.
I guess I'd say frustrated, worried, confused. I was somewhat surprised/alarmed by the conclusion that Alec was actually trying to request information on how to be considered part of the group.
It seems to me like a rather uncharitable interpretation of Alec's response, to assume that he just wants to figure out how to belong, rather than genuinely desiring to find out how best to contribute.
We try to have a culture around here where there is no vetted-by-the-group answer to this; we instead try to encourage forming your own inside-view model of how AI risk might work, what paths through to a good future might be possible, etc.
I would be rather insulted by this response, because it implies that I am looking for a vetted-by-the-group answer, and also seems to be criticising me for asking Anna about optimal careers. Firstly, that was never my intent. Secondly, asking an expert for their opinion sounds like a perfectly reasonable course of action to me. However, this does assume that Alec shares my motivations and assumptions.
I'm not sure of my assumptions/beliefs/conclusions though. I might be missing context (e.g. I don't know what the Bay Area is like, or the cultural norms), and I didn't really understand the essay (I found the example too distracting for me to focus on the concept of narrative syncing - I like the new examples much more).
From my perspective, Berry is helping cooperation with Alec by just making straightforward statements from Berry's own perspective; then Alec can compare what Berry says with what other people say and with what seems to Alec to match reality and be logically coherent, and then Alec can distinguish who does and doesn't have informed, logically coherent opinions refined through reason.
Ah yes agreed. Alec doesn't know that this is what's happening though (judging from his response to answer 1). Personally I'd default to assuming that Berry will play the role of expert since he's part of CFAR while I'm just a random computer scientist (from Berry's perspective). I would switch to a more equal dynamic only if there's a clear indicator that I should do so.
For example, if a student asks a professor a question, the professor may ask the student for their thoughts instead and then respond thoughtfully to their answer, like they would respond to a fellow professor. Or if a boss asks a new subordinate for their opinion but the subordinate thinks this is a fake question and tries to guess the cryptic instructions instead (because sometimes people ask you questions to hint that you should answer a certain way), the boss may ask someone else who's been on the team longer. When the new member sees the senior member responding honestly and the boss engaging thoughtfully with the response, the new member would know that the question was genuine.
In the careers example, I can't tell from the first answer that Anna is trying to engage me as a peer. (I'd imagine someone used to different norms might assume that as the default though.)
Hmm if CFAR organizes a workshop, then I would think it is reasonable to assume that the CFAR staff (assuming they are introduced as such) are there as experienced members of the AI risk community who are there to offer guidance to people who are new to the area.
Thus, if I ask them about career paths, I'm not asking for their opinions as individuals, I'm asking for their opinions as people with more expertise than I have.
Two possible motivations I can think of for consulting someone with expertise would be:
- In school, I ask the teacher when I have a question so they tell me the answer. Likewise, if I'm not sure of the appropriate career move, I should ask an expert in AI risk and they can tell me what to do. If I follow their answer and it fails, then it's their fault. I would feel betrayed and blame them for wasting my life.
- Someone who has been working on AI risks would have more knowledge about the area. Thus, I should consult their opinions before deciding what to do. I would feel betrayed if I later found out that they gave me a misleading answer (e.g. telling me to study ML when that's not what they actually think).
In both cases I'm trying to figure out what to do next, but in the first case I want to be told what to do next, and in the second case I just want more information so I can decide what to do next. (Expert doesn't mean you bear responsibility for my life decisions, just responsibility for providing accurate information.)
If I'm not wrong, you're asking about the first case? (My comment was trying to describe the second scenario.)
If it's just a matter of ambiguity (e.g. I will accompany you just for moral support vs accompany you to provide advice), I would just state it explicitly (e.g. tell you "I won't be helping you. If someone asks you something and you look to me for help, I'll just look at you and smile.") and then do precisely what I said I'll do.
Otherwise, if it's a mindset issue (thinking I should do what others tell me to), it's a deeper problem. If it's someone I'm close to, I would address the issue directly e.g. saying "If the GPS tells you to drive into a lake and you listen and drown, whose problem is it? I can tell you what to do, but if I'm wrong, you're the one who will suffer, not me. Even if some very wise person tells you what to do, you still have to check if it makes sense and decide if you will do it, because you have the most to lose. If other people tell you the wrong thing, they can just go 'Oops sorry! I didn't realise.' You, on the other hand, are stuck picking up the pieces. Make sure you think it is worth it before implementing anyone's advice." And if that fails, just... let them make their mistakes and eventually they'll learn from their experiences?
In the case of alignment stuff, no one knows how to cook, so the version of this that's available is "try to figure the whole thing out for yourself with no assumptions", which was part of Answer 1.
Not really, saying "I don't know" is very different from saying "after years of research, we still don't know".
"I'm not sure" sounds like the kind of answer a newbie would give, so I'm not really learning anything new from the conversation. Even worse, "don't know" sounds like you don't really want to help me - surely an expert knows more than me, if they say they don't know, then that must mean they just don't want to share their knowledge.
In contrast, if you said that no one knows because the field is still too new and rapidly changing, then you are giving me an informed opinion. I now know that this is a problem even the experts haven't solved and can then make decisions based on this information.
I don't really understand... Suppose I am a computer scientist who has just learned about AI risks and am eager to contribute, but I'm not sure where to start. Then the natural step would be to ask someone who does have experience in this area for their advice (e.g. what careers should I go into), so I can take advantage of what's already known rather than starting from scratch.
My surface question is about careers, but my true/implicit question is actually asking for help on getting started contributing to AI risk. It's not about wanting to know how to be considered a real member of the group? I would be annoyed by the first answer because it answers my superficial question while rebuffing my true question (I still don't know what to do next!). I am asking you for help and advice, and yet your response is that you have no answer. I mean it is technically true and it would be a good answer for a peer, but in this context it sounds like you're refusing to help me (you're not playing your role of advisor/expert?).
Answer 3 is good because you are sharing your experience and offering help. There's no need to make reference to culture. I would be just as happy with an answer that says "no one really knows" (instead of "I don't know"), because then you are telling me the current state of knowledge in the industry.
A similar example:
When I asked someone who was teaching me how to cook how much salt I should add to the dish, they answered "I don't know". I was annoyed because it sounded like they were refusing to help me (they weren't answering my implicit question of how to decide how much salt to put in when I cook next time) when they were the expert. It would have been better if they'd said that the saltiness varies based on the type of salt, the other ingredients, your own preferences, etc., so there is no known, fixed amount of salt. What cooks do is try adding a bit first (e.g. half a teaspoon), then taste and make adjustments. Nowadays I would know how to rephrase my question to get the answer I want, but I didn't use to be able to.
Oooh, reminds me of a passage:
Warning: also contains spoilers for Wizard of Oz
Excerpt (after the Wizard has revealed himself to be a fraud):
"I think you are a very bad man," said Dorothy.
"Oh, no, my dear; I'm really a very good man, but I'm a very bad Wizard, I must admit."
"Can't you give me brains?" asked the Scarecrow.
"You don't need them. You are learning something every day. A baby has brains, but it doesn't know much. Experience is the only thing that brings knowledge, and the longer you are on earth the more experience you are sure to get."
"That may all be true," said the Scarecrow, "but I shall be very unhappy unless you give me brains."
The false Wizard looked at him carefully.
"Well," he said with a sigh, "I'm not much of a magician, as I said; but if you will come to me tomorrow morning, I will stuff your head with brains. I cannot tell you how to use them, however; you must find that out for yourself."
"Oh, thank you--thank you!" cried the Scarecrow. "I'll find a way to use them, never fear!"
"But how about my courage?" asked the Lion anxiously.
"You have plenty of courage, I am sure," answered Oz. "All you need is confidence in yourself. There is no living thing that is not afraid when it faces danger. The True courage is in facing danger when you are afraid, and that kind of courage you have in plenty."
"Perhaps I have, but I'm scared just the same," said the Lion. "I shall really be very unhappy unless you give me the sort of courage that makes one forget he is afraid."
"Very well, I will give you that sort of courage tomorrow," replied Oz.
"How about my heart?" asked the Tin Woodman.
"Why, as for that," answered Oz, "I think you are wrong to want a heart. It makes most people unhappy. If you only knew it, you are in luck not to have a heart."
"That must be a matter of opinion," said the Tin Woodman. "For my part, I will bear all the unhappiness without a murmur, if you will give me the heart."
"Very well," answered Oz meekly. "Come to me tomorrow and you shall have a heart. I have played Wizard for so many years that I may as well continue the part a little longer."
Hmm, "slipperiness" sounds like a concept that's more useful for observing my thoughts, rather than a way to think about my interactions with others.
I can tell when my mind is trying to gloss over something (e.g. when I don't feel like specifying my entire chain of thought, and then when I do try writing out a proper explanation, I start seeing loopholes and finding counterexamples; or when I dismiss something too quickly or for no reason).
However, if I sense that someone's not fully engaging with my points, how do I know if that's because I've misunderstood them, or because they have knowledge they're unable to articulate, or because there's motivated reasoning at play etc.?
If I can't tell, then I'd think it makes sense to treat it as a general case of communication failure instead of a specific case of "slipperiness", i.e. I would try to understand what they are saying, find and ask about the apparent contradiction stated from their perspective, and observe their response.
If I do manage to identify it as one of the "slipperiness" cases, then different examples would require different approaches (e.g. disengage for manipulators, give space to calm down for emotional outbursts), so it wouldn't make sense to treat them as a general "slipperiness" case?
This sounds like an interesting idea, it would be nice to have a tool that reminds us to be our better selves. However, it seems like it'd be quite problematic to let GPT mediate our communications. Besides the issues discussed in the other comments, I also think letting GPT handle the translation encourages a more superficial form of NVC. I think the main reason NVC works is not that the framework itself lets people communicate using non-violent language. Rather, it works because using the framework encourages people to be more emotionally aware and empathetic.
As a sender, NVC helps me because:
- It makes me more aware of the inferences I am making (which may be false) based on my observations (which may not be complete). It reminds me that what I perceive is only my interpretation, which may not be true!
- It helps me pay more attention to what I'm feeling and what I'm lacking, so I can focus more on problem solving (i.e. finding ways to meet my needs), rather than emotional arguments.
- Note that sometimes, cooperative problem solving is not our goal, in which case we may not want to use NVC.
As a listener, NVC helps me because:
- It reminds me that people usually have a valid reason why they are upset, which helps me be more empathetic. For example, I used to think people were angry at me because I didn't do what they say, but later realised it was because they genuinely found my actions to be upsetting (e.g. they may find messiness to be distracting, whereas I don't notice it).
- Note that this is not always true, such as when someone is saying something to manipulate you.
My concerns with using GPT to handle the NVC translation are as follows:
- I suspect an NVC translator would encourage people to "outsource" this thinking to GPT. This would lead to people following the framework without genuinely going through the process of thinking through their (or their partner's) emotions and needs. This means people don't really get to practise the NVC skills, and so don't truly benefit from NVC.
- Knowing that others may be using a translator may also make the conversation feel less genuine because it becomes easy to fake it (e.g. maybe they are actually really mad at me and can't be bothered to make the effort to go through the NVC thought process, and are just using the translator because they believe it will get a better response).
- When it's presented as a translation, it gives the impression that the translation is the real answer, rather than just one of many possible answers.
(Also, I think some of the examples reflect a different problem that won't be solved by NVC translation. For example, I think a better approach for the covid vaccine example would be asking the other person why they believe what they believe, rather than asking them to read the research with you.)
The idea of a tool to help open up the conversation is interesting though, so here are two possible alternatives I can think of (which may not be feasible):
- Instead of translating your words, the app detects messages with strong negative emotions, and prompts you to reconsider when you send it (see the sketch after this list), similar to how Outlook reminds you if you mention "attachment" in your email but don't attach any files. This should be something that is enabled by the user, so it's like a reminder from our calmer selves to our more impulsive selves to think things through before sending.
- Instead of providing a single translation, the app suggests possible feelings/needs based on other people's experiences, while making it clear that yours may be different. For example, "Sometimes people say X because they are feeling Y1 and want Z1, or because they are feeling Y2 and want Z2. Do you feel like this applies to you? Or maybe you're feeling something else?"
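A minimal sketch of the first alternative in Python. `sentiment_score` is a hypothetical helper (any off-the-shelf sentiment model could fill the role), and the threshold is an arbitrary placeholder:

```python
def sentiment_score(message: str) -> float:
    """Hypothetical: return -1.0 (very negative) to +1.0 (very positive)."""
    raise NotImplementedError  # plug in any sentiment model here

def confirm(prompt: str) -> bool:
    return input(prompt + " [y/N] ").strip().lower() == "y"

def on_send(message: str, reminders_enabled: bool) -> bool:
    """Return True if the message should be sent immediately."""
    if not reminders_enabled:  # strictly opt-in
        return True
    if sentiment_score(message) < -0.6:  # arbitrary threshold
        # Don't block or rewrite anything; just ask once, like
        # Outlook's missing-attachment prompt.
        return confirm("This reads as quite heated. Send anyway?")
    return True
```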
I shared a similar experience reading this essay and wanted to figure out why, so I've tried writing out some of my observations/experiences, hopefully they'll help in some way?
Before I start, I'd just like to add that I enjoyed this essay. It raises a lot of interesting points that provide food for thought, e.g. that uncertainty about the location of hazards is what causes contraction, that people can also be posts, and how fear and eagerness are trying to protect the same thing. And the illustrations are pretty and helpful!
Below are my observations from reading the essay. They are my own personal experience, which may be very different from others' experiences! Many things are obvious to others but not to me, so it might just be a me-not-understanding, rather than an issue with the writing.
Anyway, here's the list:
Insufficient explanation
There seem to be two forms of meadow theory I can read from the essay. I understand and agree with the weak form, but the essay seems to be claiming the strong form without providing much explanation. (The strong form seems possibly true when I think about it, but it's not obvious to me from the essay.)
- Weak form: (my interpretation of the essay, which may be different from the author's original intent...)
- In scenarios where we want everyone to be able to explore freely despite the presence of unknown hazards, people think we should (primarily) help others by trying to remove all hazards, or by following them around to stop them from getting into dangerous situations. In other words, we are trying to eliminate danger (that we perceive) from other people's experiences.
- This works in certain situations (e.g. baby-proofing a room), but tends to be unrealistic and unsustainable as a general, long-term solution.
- Observe that the main problem is not the fact that there are hazards, but that we become constrained by our fear of getting hurt because we are uncertain of where the dangers lie.
- Thus, a better approach would be to help them learn to work in an environment with unknown hazards, by helping them figure out where the hazards are, and teaching them how to identify potentially dangerous areas and communicate such information to others (hazards here include people's boundaries, fences, wants, and needs).
- Strong form: (my attempt to follow the original essay, but I don't really understand it)
- The job of a parent (in the meadow) is to navigate the child-post interaction.
- Similarly, the job of parents, managers, generals etc. in the real world is managing how their people handle hazards as they explore? (Why? Any examples other than parenting?)
- When you know there is a hazard but are not sure where it is, you undergo a contraction because you are afraid of getting hurt.
- Contraction is bad. (I think I agree, but why? Always bad or bad in certain contexts?)
- Thus, the main responsibility of parents (and any cooperative individual) is to help their child/team etc. locate the hazards or identify potentially dangerous areas, so they can remain relatively expansive. (Main responsibility with respect to helping people stay safe or main responsibility in general?)
If the intent of the essay is to convey the weak form, then the essay seems to make unnecessary, unjustified claims. This is distracting, because I keep trying to check whether I agree or disagree with each claim (since it is not immediately obvious whether the statement is true or false) when they aren't important for understanding the main point. This makes it harder to focus on the core idea.
However, the essay seems to be arguing for the stronger form. In that case, the essay doesn't seem to be providing enough explanations. Instead, the reader has to find justifications for the claims, so that they can understand and make use of the theory.
Example:
...it is the claim of this theory and this philosophy that (undergoing a contraction) is bad.
It is not clear why I should agree that this is bad, especially since the essay states that running is a metaphor for human activity, which means I don't just have to agree that contracting in this example is bad, but that contracting in all human activity is bad.
It is immediately obvious to me that in scenarios where we want to explore as much of the meadow as possible, undergoing a contraction would be bad, because then we would be able to explore less space within the same amount of time.
However, I don't immediately see why undergoing a contraction is bad in general. The reader seems to be expected to simply agree, or to find their own justification for why this may be true. I would have expected the essay to at least provide the motivation behind the claim, such as examples of where this fear-driven contraction has led to negative consequences.
Meadow example is introduced as a metaphor
The essay presents the meadow example as a metaphor from the very start, instead of first explaining the example on its own terms and then showing how real-life situations resemble it.
I think this may contribute to the feeling of being "yanked": the reader is not given time to understand the example before seeing how it relates to their life. Instead, the reader is instructed to view real life (e.g. human activity) through a very specific lens (e.g. running in a meadow). So now I am trying to understand the example, while trying to avoid being constrained by the lens the author provides, all while trying to figure out what "human activity" might refer to.
Meadow metaphor is very broad
Running in a meadow represents "human activity", but "human activity" is so general that I don't have a concrete way of understanding the metaphor. It is also overwhelming, because any argument I evaluate has to apply to all possible human activity rather than just a specific scenario. It feels a bit like being asked to agree or disagree with an entire worldview or life philosophy ("our main job when helping others, in any scenario, is to help them locate hazards"), rather than with a specific claim in a specific context ("when we want people to be able to freely explore a space with unknown hazards, it is better to help them locate those hazards"), when the arguments only cover one specific context (parenting).
Parenting appears in both the metaphor and the example/application
I find it confusing that the parent appears in both the metaphor (parent in the meadow) and the application (parenting in general).
Example:
...the job of the parent is to somehow navigate the child-post interaction.
Meadow-parent or real parent? The paragraphs building up to this statement show how this is true for the parent in the meadow, but don't support the broader claim that it is true for parenting in general. If I want my child to explore a meadow freely, then my job is to navigate the child-post interaction. But it is not obvious to me that the main concern of a parent is always to ensure that their child is able to explore reality freely.
Using a parent in the meadow example brings in extra connotations
A parent-child relationship carries a lot of connotations (which can vary with culture, personal values, and experiences). Using a parent in the meadow example suggests that this relationship is core to the metaphor, which gives the metaphor more "baggage" and makes it harder to see how it relates to other scenarios.
For example, when I try to see how it relates to project management, I keep getting distracted by the fact that my relationship with my project manager is very different from my relationship with my parent. My parent was responsible for me in a way that my project manager isn't. My parent knew a lot more than me, yet I can see many hazards that my project manager can't. My parent wanted me to explore, but my manager wants us to move in a specific direction.