Posts
Comments
You inspired me to write this up over at EA forum, where it’s getting a terrible reception :D All the best ideas start out unpopular?
I wouldn’t be surprised if a lot of EAs see my takes here as a slippery slope to warm glow thinking and wanton spending that needs to be protected against.
I didn't have this reaction at all. The four lessons you present are points about execution, not principles. IMO a lot of these ideas are cheap or free while being super high-value. We can absolutely continue our borg-like utilitarianism and coldhearted cost-benefit analysis while projecting hospitality, building reputation, conserving slack, and promoting inter-institutional cooperation!
But I do think they'll require an EA spin. For example, EA can't eschew high-value cause areas (like X-risk) because it would look weird to be associated with them. But we can and should take reputation into account when selecting interventions (i.e. we should have weighed the benefits of a chance at getting an EA-aligned congressman with the reputational risk that stemmed from putting millions of cryptobucks into a congressional election, not that realistically we had any control over SBF's actions or identity as an EA).
For hospitality, I think one thing EAs can do is to distinguish the "controlling reason" we do an intervention vs. the "felt reason" we do it. What do I mean by that? An EA may choose to donate to Against Malaria Foundation for coldhearted cost-benefit analysis reasons. But that EA can also have other motivations, feelings and values alongside the analysis - being able to tell a visceral, vivid, felt story about why they personally feel connected to that cause is a way to come across as not borglike.
We can donate a little money locally just to project warmth and connection to the people around us, because we do believe in helping locally - we just try to prioritize helping globally even more. But if people are concerned that we've shut off our compassion and feel alienated from EA on that basis, this is a way we can counteract that impression in a way that might even help improve EA engagement, since it's honestly a little difficult to relentlessly reject local appeals for aid in order to give 100% of your charity to EA causes. Like, donate 9% of your income to EA-aligned charities and 1% of your income to local charities. If you make $80,000/year, that's still $800, which on its own matches an average American's annual charitable donation. And now, instead of the story being "you give zero dollars to local charities so you can do borglike optimization for X-risk-related donations," the story can be "you give as much as the next person to local charities, while also donating a very substantial portion of your income to X-risk-related charities."
To me this just seems like the same line of thinking that leads us to limit the EA donation appeal to 10% of the typical person's income, instead of demanding that people donate until they're living like the global poor. We relax the demands we make on our members in order to make our movement human-compatible. Encouraging a fraction of EA donations to be local or warm-fuzzy-optimized is another way of being human-compatible while still doing a huge amount of good.
The most common anti-eugenics stance I encounter is also opposed to epilogenics. From this point of view, parents choosing to select for desirable traits in their offspring using advanced medical technology is wasteful, immoral and gross. They have roughly the same feelings about epilogenics (including for height) as they have about cosmetic plastic surgery. To them, a natural and traditional trajectory of healthy human lifespan is ideal - we should maintain our health via diet and exercise, try not to care too much about superficial traits like appearance or intelligence, then die in our 80s or so.
I think that convincing doctors and patients that it's good to take fuller control of one's own body (and to influence one's children's bodies) via advanced med tech is the main hurdle to promoting an acceptance of epilogenics.
I think that if there is an objective morality, then you can use your concern about self-congratulatory narratives as a starting point. What moral view is leading you to think there’s any problem at all with enjoying a self-congratulatory narrative? Once you’ve identified it, you can figure out what other moral positions it might imply.
Even that 0.69%-acceptable statistic may be a political maneuver. I found a meta-analysis a year or two ago of AI healthcare diagnostics that found about this level of acceptability in the literature.
Where it becomes political is that a prestigious doctor friend unsympathetic to AI diagnosis used this statistic to blow off the whole field, rather than to become interested in the tiny fraction of acceptable research. Which is political on its own, and also has to make you wonder if researchers set their quality bar to get the result they want.
Nevertheless, it IS discouraging that only about 276 of 40,000 papers would be acceptable.
I think it's a complex question. For example, people debate whether porn is harmful or helpful:
- Morally or practically
- In the short vs. long term
- To the actors
- Directly to the viewer
- To the viewer's partner
- To culture as a whole
- For intrinsic reasons or because of how it intersects with the rest of our culture
- Universally, on average, or in specific circumstances
- Whether it's actually harmful/helpful or just a suboptimal/better way to express sexuality
If you get specific enough about these questions, it may be possible to ask meaningful scientific or moral questions. When I've seen debates over porn be productive, it's usually because the participants stopped generalizing and got really specific about what exact question they're asking, why, and how.
But this in turn poses a new problem: how would you figure out which bits of this debate are relevant to you? And once you have an answer to this question, you may find that there really isn't much reliable information out there that's pertinent to you.
But if you just want to trawl through the scientific literature, I would just approach it with the same open-minded skepticism you'd bring to any other such project.
Yes, I agree that if "practical problem in your life" did not include "looking good" or "goes with my other clothes" as design parameters then you'd probably end up in a situation like that. I succeeded at avoiding this problem because I specifically set out to find pants that were good for biking and looked like professional work pants (fortunately I already had some that did). This can be useful: it puts a sharp constraint on the shirts I buy, requiring them to look good with these specific pants. That limitation can be helpful in making the overwhelming number of choices manageable.
I agree with the perspective you're laying out here. These days, I take a slightly more concrete approach to choosing my wardrobe. It still fits the perspective, but the thought process is different.
To decide what to buy, I think about a specific purpose in my life for which I need clothes, and I try to get as specific as possible.
For example, I just started a new job, and I wanted to buy some new clothes for it. Because I already had plenty of suitable shirts, I started to think about the requirements for optimal pants for this application.
- I bike to work, so I wanted pants that:
  - Are lightweight and moisture-wicking
  - Don't get caught in the bike chain
- I don't make a lot of money, so I wanted pants that were < $40
- I only feel comfortable in earth tones
- I wanted multiple colors
- I wanted pants that looked professional
I figured that there must be some sort of athletic pant in a professional-looking cut, and as it turns out, there are! I found what I needed at Nordstrom Rack and they worked out great for me.
The key here, however, is that I didn't start by thinking about abstract qualities of ideal pants (i.e. by listing a set of attributes along which pants can theoretically be optimized). I started by thinking about a practical clothing problem in my life, then imagined the abstract qualities of pants that would make them great solutions to this practical life problem. Then I went and found real-world versions of those pants (and some moisture-wicking underwear and socks as well to complete the bike-commuting-friendly wardrobe).
Likewise, I recently considered how to deal with shoes for bike commuting in winter. My shoes might get soaked on the way to work. Yet my feet tend to overheat and get sweaty and itchy over the course of the day. I didn't want to carry pairs of shoes back and forth.
At first I considered waterproof sneakers like Vessis, but I found that water can sneak in through the top, and they are very hot on your feet. Then I considered biking sandals, while keeping a foot towel as well as socks and shoes at work. Then I realized that there are such things as shoe dryers, so I can potentially wear breathable shoes, put them on shoe dryers at work, keep a second pair of at-work shoes and some socks to change into, and change back into the dried-out bike shoes on the way back. This lets me get whatever biking shoes or sandals seem like they'd be most comfortable in a particular season, while picking shoes that are optimized for comfort or style at work.
I understand your point is that material circumstances control the moral ideas prevalent in a culture, and that these cultural ideals in turn control individual beliefs and actions. Our morality and that of our ancestors is therefore determined largely by material circumstances.
Alongside this deterministic framework, you are arguing for a Dawkins-style selfish-meme explanation for which cultural ideas survive and flourish. Specifically, you are arguing that historical material circumstances favored the survival of a pro-slavery, pro-war morality, while modern circumstances favor survival of an anti-slavery, minimal- or anti-war morality. Which view we hold is an accident of birth, and knowing this, we should treat our moral views with some skepticism and cynicism. Under this relativist perspective, being pro-slavery is actually an equally valid moral stance as being anti-slavery.
I think your post could have benefitted from an explicit consideration of whether or not there are objective moral truths that we can uncover. It's certainly scary in some sense if an accident of birth can cause us to miss out on objective moral truth. Missionaries used to travel the globe to tell poor benighted tribes about Jesus and solve this exact problem. I'm an atheist, so I think they screwed up figuring out "objective morality." But I remain sympathetic to the idea that there are objective moral truths to be discovered. So what's scary isn't necessarily that there is no morality and all things are relative. It may be that there is morality and we're screwing it up!
From that point of view, cynical skepticism toward one's own moral view doesn't seem to me the proper attitude. Neither does being scared of how materiality shapes one's morality: historically, you were much more likely to have been born a slave than a slaver. While it would have been terrible to be a slave, from a moral perspective, you're relatively safe from accidentally having been born into a pro-slavery worldview.
As an alternative to cynicism and fear, I prefer sincere curiosity about how to think about morality and a healthy appreciation for how materiality and culture shape our worldview.
Many commenters seem to be reading this post as implying something like slavery and violence being good or at least morally okay... I read it as a caution similar to the common points of "how sure are you that you would have made the morally correct choice if you had been born as someone benefiting from slavery back when it was a thing" combined with "the values that we endorse are strongly shaped by self-interest and motivated cognition"
I don't agree with your characterization of the post's claims. The title is synonymous with "morality is arbitrary virtue-signaling," and it promotes a cynical attitude toward moral argument in general. There is nothing wrong with having a cynical attitude, provided it's useful and/or correct. Is there reason to believe such cynicism is in fact useful or correct? The post doesn't promote moral cynicism by considering more sincere moral perspectives, falsifying them, and then promoting a cynical approach as a better alternative. It promotes cynicism by painting an evocative series of images that encourage us to cast ourselves as George Washington (rather than as one of his 150 slaves) or as a powerful Bronze-age warrior (rather than as one of the women he just raped and murdered in his latest town-sacking), and thereby take a cynical stance toward our own modern-day morality because it's just an arbitrary material fact that we were born as ourselves rather than as slaveholders and city-sackers.
I do think that characterizing morality as a way of promoting attitudes that lead to net better outcomes for their subscribers (despite short-term temptations) is a pretty good framing. And it would therefore be true that morality is governed by materiality. But materiality is anything but arbitrary and self-congratulatory. It's as objective as the ratio of slaves-to-Washingtons and peasants-to-knights. If morality is about living well within our material conditions, then morality is mainly objective, asking questions like "given my resources and abilities, what is the best-in-expectation way for me to live in modern society?" There are other questions one could ask that might lead to different answers and actions, such as "what's the best way to improve the state of the world for my grandchildren?" How we prioritize these questions is probably more subjective. But the answers to the questions are in theory mostly objective.
From this point of view, the idea that we'd have been pro-slavery had we been born into a slaveowning family should no more disturb us than the idea that we'd have been flat-Earthers if born in a place and time when the Earth was widely considered to be flat.
Based on the evident historical record, without the environmentally deleterious bounty fossil fuels facilitated, most of us would be conjuring up creatively compelling excuses for why forcing your neighbor to work for free is the Moral thing to do.
I can't speak to every era, but in the middle ages, about 75% of us would have been serfs: not tradeable individually, but bound to a plot of purchasable land. No way most of us would have been spending our time innovating arguments for the morality of slavery.
Arguments for the morality of slavery come down to us from the words and images of the past. Who made the words and images that survive? Mainly slaveholders and serf-owners. As you point out, those who didn't subscribe to a pro-war, pro-slavery morality selectively died out or became enslaved until the economy made the institution of slavery obsolete, and they lost the ability to transmit their moral views into the future.
Another UI note - the scrollbar is so thin it is hard to use.
I'm honestly not sure if this system would be:
- Harmful, mostly replacing high-quality comments with modest-quality reacts
- Very harmful, with interest in the site draining away as commenting becomes abnormal
- Helpful, with silence or low-quality comments (which could include inflammatory comments) replaced with modest-quality reacts
- Very helpful, as the continuum of ability to engage escalates people into interactions they'd otherwise have skipped, as authors see that apparently unseen comments actually have a lot of eyeballs on them, and leading to a positive feedback loop in which engagement leads to more engagement
- Neutral, with these trends in balance, probably in some sort of complicated manner I can't foresee
I think any truly bad effects on the site would take place over the long run, and I think we could learn a lot by experimenting with it about whether it seems good or bad, so I tentatively support an experimental rollout.
Just noting a point of confusion - if changing minds is a social endeavor having to do with personal connection, why is it necessary to get people to engage System 2/Central Route thinking? Isn’t the main thing to get them involved in a social group where the desired beliefs are normal and let System 1/Peripheral Route thinking continue to do its work?
I would pay about $5/month for a version of Twitter that was read-only. I want a window, not a door.
And I’m not sure about the scales being an icon for “seems borderline.” Some sort of fuzzy line or something might be more appropriate. Scales make me think “well measured.”
The support icon looks at first glance like a garbage can although I can tell it’s meant to be a pillar.
I think with this system you will end up with too many large difficult and uncatchy jumps. Plus similar phone numbers will sound similar which is not what you want.
How does that work with 10 available digits and only 7 scale notes? Do three digits become accidentals or something?
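For concreteness, here's one entirely hypothetical way such a mapping could work (my own guess, not the scheme being proposed): digits 1-7 take the major-scale degrees, and 0, 8, and 9 spill into the next octave rather than becoming accidentals.

```python
# Hypothetical digit-to-note mapping: digits 1-7 are the major-scale
# degrees in octave 4; digits 8, 9, and 0 continue the scale into octave 5.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def digit_to_note(d):
    degree = (d - 1) % 10             # treat 0 as the tenth scale step
    octave, step = divmod(degree, 7)
    semitones = MAJOR_SCALE[step] + 12 * octave
    return NOTE_NAMES[semitones % 12] + str(4 + semitones // 12)

melody = [digit_to_note(int(d)) for d in "5551234"]
print(melody)  # ['G4', 'G4', 'G4', 'C4', 'D4', 'E4', 'F4']
```

Playing a few numbers through a mapping like this makes the jump problem visible: a digit pair like "19" spans well over an octave.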
I did this for a while, but then returned it and just started opening the windows more often, especially when it felt stuffy.
Steelman as the inverse of the Intellectual Turing Test
The Intellectual Turing Test (ITT) checks if you can speak in such a way that you convincingly come across as if you believe what you're saying. Can you successfully pose as a libertarian? As a communist?
Lately, the ITT has been getting boosted over another idea, "steelmanning," which I think of as arguing against the strongest version of an idea, the opposite of weakmanning or strawmanning.
I don't think one is better than the other. I think that they're tools for different purposes.
If I'm doing the ITT, I'm usually trying to build empathy in myself for a different perspective, or build trust with the person I'm talking with that I grok their point of view. It's for when I do understand an argument intellectually, but need to demonstrate to others that I also understand it emotionally and rhetorically.
If I'm steelmanning, I'm usually trying to build an intellectual appreciation for a point of view that seems foolish, but is held by somebody I respect enough to take seriously. I'm trying to do so for my own sake, in the hope that I might learn something new from the attempt.
I came across GreyZone Health today, thought it might be relevant:
GreyZone Health
Hope for Difficult to Diagnose, Rare, and Complex Medical Conditions
Facing a Misdiagnosis, or Having No Diagnosis at All?
With our exceptional patient advocate service, GreyZone Health helps patients like you with difficult to diagnose, rare, and complex medical conditions. GreyZone Health finds answers and improves your quality of life. Based in Seattle, Washington, our professional patient advocates serve patients around Washington state and around the world, both virtually and in person.
If you are struggling with persistent health symptoms and/or you are having a hard time managing your complex medical situation and need patient advocacy, we are here to help!
My suggestion would be to start by focusing on hypotheses that your illness has a single cause that is short-term, like a matter of minutes, hours, or at most a day. And also that it’s reliable - do X and Y happens, almost every time. These assumptions are easiest to rule out and do not require elaborate tracking. You may also want to focus on expanding your hypothesis space if you haven’t already - food, exercise, sleep, air quality, pets, genetic and hormonal issues, and chronic infections, are all worth looking at.
As you noticed, testing more complex hypotheses over long time scales makes the process of gathering evidence more costly and slow, and the results become less reliable due to the risks of confounding and the number of post-hoc tests you will be running.
This is a staged prompt, with the first stage initiating the conversation and stages 2 and 3 coming after GPT-4’s first and second replies respectively.
First stage:
You are a science fiction writer giving instructions to a genie to get it to write you a science fiction novel capable of winning the Hugo award. However, you know that genies often misconstrue wishes, so your wish needs to be detailed, conceptually bulletproof, and covering all facets of what makes a science fiction novel great. You also only get three wishes, so it has to be a good prompt. First, simulate the science fiction author’s thought process in coming up with a good prompt. Then have them make a wish to the genie for the sci fi novel using the prompt.
Second stage:
Now, have the genie write the first chapter of the novel based on the science fiction author’s wish.
Third stage:
have the genie continue writing chapter 1 to the end of the chapter.
Story (first try, no edits. prompt in a reply to this comment)
Chapter 1: The Last Sunrise
The horizon wore an orange-red hue, a token of farewell from the sun. It was the last sunrise Jonas would ever witness, the last that his biological eyes would capture and transmit to his fleshy, mortal brain. Tomorrow, he would wake up inside a machine.
A sigh escaped his lips, a whisper in the morning air. He sat on the edge of the roof, feet dangling four stories above the city, staring at the kaleidoscope of colors. The city was waking up, the sounds of the waking world quietly drifting upward.
“Beautiful, isn’t it?” The voice came from behind him. It was soft, feminine, and familiar. He didn’t need to turn around to know that it was Ada.
He nodded, swallowing the lump in his throat. “Yeah, it is.”
Ada moved next to him, her dark eyes mirroring the hues of the sunrise. She was an Original, like him, a decision she wore proudly. Her unaltered body was a testament to their shared values - values he was about to betray.
“Are you sure about this, Jonas?”
He let out a bitter laugh. “It’s not like I have a choice.”
“Everyone has a choice.” Ada’s voice was firm. She had always been strong, unwavering in her principles.
“Some choices are luxuries, Ada,” Jonas countered, his gaze never leaving the sunrise. “And I can’t afford them anymore.”
Ada was silent for a while. When she spoke again, her voice was soft, barely above a whisper. “But this...this is forever, Jonas. Once you Transfer, there’s no coming back. You'll be leaving everything behind.”
“I know.” His voice was thick with emotion. “But I can’t stay, Ada. Not like this.”
The silence stretched between them, filled only with the sounds of the waking city below. They sat there together, two silhouettes against the backdrop of the last sunrise, two humans on the edge of a divide that was about to grow infinitely wider.
“I’ll miss you, Jonas,” Ada murmured. “The real you.”
Jonas turned to her, his gaze softening. “Who’s to say what’s real, Ada? Maybe...maybe this is just another way to be human.”
Ada didn’t respond, her gaze locked with his. The sun had begun to climb higher, its warm rays casting long shadows in the city below.
As the last of the sunrise faded away, Jonas reached for Ada's hand, holding it tightly. He wanted to remember the feel of her skin, the warmth of her touch, the pulse of life beneath her fingertips. He wanted to remember what it was like to be human.
Because tomorrow, he wouldn't be human anymore. Tomorrow, he would wake up inside a machine. And he had no idea if he would ever truly see a sunrise again.
Chapter 1: The Last Sunrise (Continuation)
Jonas stood up, his gaze lingering on the cityscape that stretched out before him. The skyline was a mixture of old and new, buildings of brick and concrete standing alongside those made of sleek glass and metal.
His eyes traced the silhouette of the building where the Transfer would take place – a monolith of chrome and glass that pierced the azure sky. It was a testament to mankind's triumph over the limitations of biology. It was also a tombstone, marking the death of human fragility and the birth of something new, something immortal.
The thought sent a shiver down his spine.
Ada stood up next to him, her hand still in his. "Promise me something, Jonas."
He turned to look at her. "What?"
"Promise me you won't forget. Don't forget who you were, who we were...together."
Jonas nodded, squeezing her hand. "I promise, Ada."
As they descended from the rooftop, he found himself cataloging everything. The roughness of the concrete stairs beneath his bare feet, the smell of the city - a mix of exhaust fumes, fresh-baked bread from a nearby bakery, and the distant scent of the sea.
He wanted to remember it all, capture every sensation and lock it away in his memory. He was, in essence, experiencing his last day as a human.
In the apartment, Jonas moved methodically. He dressed, choosing a plain white shirt and dark jeans. He packed a small bag with his essentials - not that he would need them, but the act itself was comforting, a semblance of normality on a day that was anything but.
Before leaving, he paused in front of the mirror. The man staring back at him was familiar, yet alien. His brown eyes were wide and bright, his skin smooth, the scar on his chin a reminder of a childhood fall. Tomorrow, he would have a new face, a new body. The thought was both terrifying and thrilling.
Finally, he left the apartment. The city was fully awake now, bustling with life. Cars zoomed past, people hurried down the sidewalks, each absorbed in their own world. To them, it was just another day. But to Jonas, it was the end of everything he knew.
The ride to the Transfer facility was a blur. Jonas barely registered the scenery passing by. His thoughts were consumed by what lay ahead. The car pulled up in front of the monolith he had seen from the rooftop. The stark reality of his decision hit him again, a tidal wave of fear and anticipation.
Inside the building, everything was clean and clinical. The receptionist, a Transfer with skin that shimmered like liquid silver, checked him in with a synthetic smile. As she handed him a data pad with the consent forms, he couldn't help but stare. She was what he would become.
Signing the forms, Jonas realized that he had crossed the point of no return. He was led down a corridor to a room that was stark white, filled with advanced machinery that hummed with quiet efficiency.
"Please lie down, Mr. Jonas," a technician, another Transfer, instructed, gesturing to the bed-like platform at the center of the room. "The procedure will begin shortly."
Jonas obeyed, his heart pounding in his chest. As he lay there, staring at the ceiling, he thought of the sunrise, of Ada, of the world he was leaving behind.
The last thing he remembered was the technician's voice, soft and soothing. "Don't worry, Mr. Jonas. When you wake up, you'll be a new man."
Then, everything went black. The last chapter of his human life had ended. The next chapter -
I thought she was going to start disseminating seeds and sprouting vines in the end. This made me laugh out loud.
And in one:
```python
print('\n'.join(['Fizz' * (i % 3 == 0) + 'Buzz' * (i % 5 == 0) or str(i) for i in range(1, 101)]))
```
ChatGPT does it in two:
```python
for i in range(1, 101):
    print("Fizz" * (i % 3 == 0) + "Buzz" * (i % 5 == 0) or i)
```
My 3-line FizzBuzz in Python:

```python
for i in range(1, 101):
    x = ["", "Fizz"][i % 3 == 0] + ["", "Buzz"][i % 5 == 0]
    print([x, i][len(x) == 0])
```
Up front: I am biased against extreme diets like water-only fasts. I can see a use case in carefully medically supervised settings, such as for a cancer treatment, and I know that some religious practitioners use them. I've never tried them and have never been morbidly obese.
The only truly relevant paper I found was a case study of a woman whose 40-day water fast caused thiamine deficiency, which led to her developing a severe neurological disorder called Wernicke's encephalopathy (source).
The academic literature on prolonged water-only fasts is extremely limited, particularly for weight-loss purposes in humans. Prolonged water-only fasts have a few more studies in non-human organisms and are sometimes used as anti-cancer treatments.
I would be really careful about basing decisions on reddit anecdotes. Questions I would be wondering about:
- What nutritional deficiencies and other potentially serious health risks might result from a sustained water-only fast?
- For those who lost weight on a prolonged water fast, did they keep it off long-term?
- Of those who try a water fast, how many fail to sustain it?
- How much selection bias is there in favor of success stories when people post their weight-loss experiences in /r/fasting?
- How does a failed extreme dieting attempt affect your motivation to try again in the future?
The metis in the dieting world appears to be that crash diets tend to be yo-yo diets, leading either to failure of the diet attempt or to rapid weight regain after the diet ends. My picture of successful dieting tends to look more like what I'm doing:
- Eating a lower but sustainable number of calories per day (like 1200 calories for a 200-pound person), mainly by replacing oils/fats, sugars and starches with fruits and non-starch vegetables and by improving the ability to eat limited portions
- Exercising, especially resistance training, which has the best evidence for counteracting the decrease in resting metabolic rate that occurs when you start dieting
- Gaining intellectual knowledge of nutritional concepts like resting metabolic rate, caloric density, glycemic index, and how the shift from water/glycogen-burning to fat-burning affects your rate of weight loss
- Developing a typology of your hunger symptoms, what they seem to mean (i.e. my stomach isn't full vs. my blood sugar is low), whether they should motivate you to eat, and if so, what type of food and how much. For example, in my case, stomach rumbles and a feeling of emptiness seems to mean my stomach is empty, whereas wooziness, shortness of breath, and irritability seem to mean my blood sugar is low.
- Figuring out at what stage of hunger and in what circumstances you do or do not have self-control over your eating, and making sure to avoid circumstances where you lose control and impulse-eat
- Adjusting your shopping, meal-planning, and restaurant-ordering behaviors
- Getting socially connected: talking about your goals with family and friends, with a doctor or nutritionist, with an exercise coach, and being at least occasionally involved with support groups
- Making it fun and motivating: building up a positive picture of what benefits you expect to reap and a clear-eyed understanding of where your weight and weight-linked physiological traits are now and where you're headed if you don't change course
- There are amazing reports about the new weight loss drugs like Wegovy, but I have not tried them and they are currently very expensive and hard to get
Please don't take this as medical advice - it's just a description of what I'm doing. I'd highly recommend exploring a range of dieting options and considering each on their merits before selecting one, and ideally interfacing with a medical professional to consider your specific situation.
I'll consider that! Thanks MondSemmel.
Back on my laptop, so I can quote conveniently. First, I went back and read the Tale of Alice Almost more carefully, and found I had misinterpreted it. So I will go back and edit my original comment that you were responding to.
Second, Villiam's point is that "ok but slightly worse than current group average" behavior has "potential to destroy your group" if you "keep adding people who are slightly below the average... thus lowering the average," lowering the bar indefinitely.
Villiam is referencing a mathematical truth that may or may not be empirically relevant to real group behavior. For an example of a situation where it would not apply, consider a university that effectively educates its students. Every year, it onboards a new group of freshmen who are below-university-average in terms of scholastic ability, and graduates its highest performers.
Of course, we know why the quality of students at the university doesn't necessarily degrade: the population of high schoolers it recruits from each year may have relatively static quality, and the university and students both try to improve the scholastic performance of the incoming class with each year so that the quality of the senior class remains static, or even improves, over time.
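The arithmetic behind both dynamics can be sketched quickly. All numbers here are invented for illustration; the point is only that "always admit below-average members" lowers the average under pure dilution, but not under university-style turnover with improvement:

```python
# Minimal sketch (hypothetical numbers) of the two dynamics discussed above:
# naive dilution vs. turnover paired with improvement.

def add_members(avg, n, new_scores):
    """Return the new group average after adding members with the given scores."""
    total = avg * n + sum(new_scores)
    return total / (n + len(new_scores))

# Pure dilution: repeatedly admit someone slightly below the current average.
avg, n = 100.0, 10
for _ in range(5):
    avg = add_members(avg, n, [avg - 1])
    n += 1
# The average drifts downward each round, as Villiam's argument predicts.

# University-style turnover: freshmen arrive below average, everyone improves
# during their stay, and the most skilled class (the seniors) graduates out.
cohort = [90.0] * 4  # four class years, freshman through senior
for year in range(5):
    cohort = [score + 5 for score in cohort]  # everyone improves each year
    cohort = [90.0] + cohort[:-1]             # seniors leave, freshmen arrive
# Here the group average settles at a constant level despite always
# admitting below-average members.
```

The second loop settles into a steady state where the average stops changing, which is the university case in the comment above.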
In my view, a combination of private constructive criticism and public praise works very well to motivate and inform students when they are learning. Furthermore, an environment that promotes learning and psychological wellbeing is attractive to most people, and I expect that it provides benefits in terms of selecting for high-performing recruits. I had mistakenly read sarahconstantin's post as advocating for public humiliation of slightly-below-average performers in order to ice them out or motivate people to work harder, which is not what she was calling for. This is why I wrote my original comment in response to Raemon.
You seem to be pointing out that if we praise people (in the context of my original comment, praise slightly-below-average performers for personal improvement), then some people will incorrectly interpret us as praising these slightly-below-average people as being "good enough."
I think there is a way to steelman your claim - perhaps if a sensei systematically praises the personal-best performance of a below-group-average student, then other students will interpret the sensei as having low standards, and start bringing less-committed and less-capable friends to intro sessions, resulting in a gradual degradation of the overall quality of the students in the dojo.
But I think this is an empirical claim, not a mathematical truth. I think that an environment where participants receive praise for personal-best performance results in accelerated improvement. At first, this merely counteracts any negative side effects with recruitment. Over time, it actually flips the dynamic. The high-praise environment attains higher average performance due to accelerated improvement, and this makes it more appealing to even higher-performing recruits both because high-praise is more appealing than low-praise and because they can work with higher-skill peers. Eventually, it becomes too costly to onboard more people, and so people have to compete to get in. This may allow the group to enforce higher standards for admission, so another beneficial selection force kicks in.
This model predicts that high-praise environments tend to have higher quality than low-praise environments, and that shifting to a high-praise style will result in improved group performance over time.
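As a toy illustration of that prediction (every parameter here is invented, so treat it as a sketch of the claim's logic rather than evidence for it): if praise meaningfully accelerates per-member improvement, it can outweigh a modest dip in initial recruit quality.

```python
# Toy model (all parameters hypothetical): compare a high-praise group with
# faster improvement but initially weaker recruits against a low-praise group
# with slower improvement but stronger recruits.

import random

def simulate(improvement_rate, recruit_mean, years=10, seed=0):
    """Average group skill after `years`, with annual improvement and one recruit per year."""
    rng = random.Random(seed)
    group = [50.0] * 10
    for _ in range(years):
        group = [skill + improvement_rate for skill in group]  # everyone improves
        group.append(rng.gauss(recruit_mean, 5.0))             # one new recruit joins
    return sum(group) / len(group)

# High-praise: faster improvement, slightly weaker recruits at first.
high = simulate(improvement_rate=3.0, recruit_mean=45.0)
# Low-praise: slower improvement, slightly stronger recruits.
low = simulate(improvement_rate=1.0, recruit_mean=50.0)
```

Under these made-up numbers the high-praise group ends up with the higher average, matching the dynamic described above; whether real groups behave this way is, as the comment says, an empirical question.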
You seem to think that Villiam's point "follows" from the fact that not everybody will correctly understand that praising personal-best performance doesn't mean holding that person's work up as exemplary. I don't know how strongly you mean "follows," but I hope this essay will clarify the overall view I'm trying to get across here.
Your comment is a response to my rejection of the claims in Alice Almost that a good way to improve group quality is to publicly humiliate below average performers.
Specifically, you say that praising the improvement of the lower performing members fails to stop Villiam’s proposal to stop evaporative cooling by kicking out or criticizing low performers.
So I read you and Villiam as rejecting the idea that a combination of nurture and constructive criticism is the most important way to promote high group performance, and that instead, kicking out or publicly making an example of low performers is the better way.
If that’s not what you’re saying then let me know what specifically you are advocating - I think that one of the hard parts of this thread is the implicit references to previous comments and linked posts, without any direct quotes. That’s my fault in part, because I’m writing a lot of these comments on my phone, which makes quoting difficult.
I'm responding to Raemon's link to the Tale of Alice Almost, which is what I thought you were referring to as well. If you haven't read it already, it emphasizes the idea that holding up members of a group who are slightly below the group average as negative examples can somehow motivate an improvement in the group. Your response made me think you were advocating doing this in order to ice out low-performing members. If that's wrong, then sorry for making false assumptions - my comment can mainly be treated as a response to the Tale of Alice Almost.
The fundamental premise of trying to have a group at all is that you don’t exclusively care about group average quality. Otherwise, the easiest way to maximize that would be to kick out everybody except the best member.
So given that we care about group size as well as quality, kicking out or pushing away low performers is already looking bad. The natural place to start is by applying positive reinforcement for participating in the group, and only applying negative pressures, like holding up somebody as a bad example, when we’re really confident this is a huge win for overall group performance.
Edit:
The original version of my comment ended with:
"Humiliating slightly below group average performers seems frankly idiotic to me. Like, I’m not trying to accuse anybody here of being an idiot, I am just trying to express how intensely I disagree with the idea that this is a good way to build or improve the quality of groups. It’s like the leadership equivalent of bloodletting or something."
This was motivated by a misreading of the post Raemon linked and suggested an incorrect reading of what MY Zuo was saying. While I absolutely believe my statement here is true, it's not actually relevant to the conversation and is probably best ignored.
I think the post is describing a real problem (how to promote higher standards in a group that already has very high standards relative to the general population). I would like to see a different version framed around positive reinforcement. Constructive criticism is great, but it’s something we always need to improve, even the best of us.
People are capable of correctly interpreting the context of praise and taking away the right message. If Alice is a below-average fighter pilot, and her trainer praises her publicly for an above-average (for Alice) flight, her peers can correctly interpret that the praise is to recognize Alice’s personal growth, not to suggest that Alice is the ideal model of a fighter pilot. What inspires individual and collective improvement and striving is an empirical psychological question, and AFAIK a background of positive reinforcement along with specific constructive criticism is generally considered the way to go.
I’m not sure that “group average” is always the metric we want to improve. My intuition is that we want to think of most groups as markets, and supply and demand for various types of interaction with particular people varies from day to day. Adding more people to the market, even if they’re below average, can easily create surplus to the benefit of all and be desirable.
Obviously even in real markets it’s not always beneficial to have more entrants, I think mainly because of coordination costs as the market grows. So in my model, adding extra members to the group is typically good as long as they can pay for their own coordination costs in terms of the value they provide to the group.
Yeah, I think this is an important explanation for why (in my preferred image), we’d find the faeries hiding under the leaves in the faerie forest.
To avoid behavior that’s costly to police, or shortcomings that are hard to identify, and also to attract virtues that are hard to define, we rely in part on private, reputation- and relationship-based networks.
These types of ambiguous bad behaviors are what I had in mind when I wrote “predatory,” but of course they are not necessarily so easy to define as such. They might just be uncomfortable, or sort of “icky sticky,” or even just tedious, and not only in a sexual way. The grandstanding blowhard or tireless complainer or self-righteous moralizer also fit the bill. Maybe the word “scrubs” is a better word?
I don't think this is an adequate account of the selection effects we find in the job market. Consider:
- We don't expect people to disappear from the job market just because they're the best. They disappear from the market when they've found a match, so that they and their counterpart become confident in the relationship's durability and shift from explore to exploit, investing in the relationship for the long term. The pool of candidates comprises both those who outgrew their previous position or relationship and those who got fired or dumped. Insofar as the market suffers from information asymmetries or different rates of performance growth between partners, we should expect lots of participation in the market by high-quality people seeking to move up from a previous mismatch with a low-quality partner.
- Low-quality employees get discouraged and exit the job market in their field, while low-quality businesses go bankrupt. The people who aren't dating include both those who are in relationships (which only means they're well-matched and better than no partner at all) and those who are too stressed or unhealthy or discouraged to even try to date. Participating in the market is a costly signal that you consider yourself hireable/dateable or are successful and ready to grow.
- Searching widely for the next job may be a sign of vigor and open-mindedness - the people putting out the most applications are those most determined to succeed.
One factor that is discouraging to consider is how switching costs and the cost of participation in the market intersect.
- If it's cheap to look for a partner, costly to break up, and there's a lot of information asymmetry, then there'll always be a set of terrible partners who are always on the hunt (because it's cheap), who have a real chance of finding a better match (because of information asymmetry), and who can expect to keep the good match around for a while (because of high switching costs). The US military is an example. It has a massive advertising budget and a huge manpower shortage, so it's always on the hunt for recruits. There's a big difference between its heroic and self-actualizing self-portrayal and the dismal and dehumanizing experience many soldiers report. And once you're in, you can't just leave. The existence of these entities in any market with these properties is a discouraging sign when considering candidate jobs/partners at random. If you find that participation in the market is cheap, or that sharing negative information about a candidate is discouraged (e.g. a powerful politician who could retaliate against anyone accusing them of wrongdoing), then learning this information should make you downgrade your expectations of the candidate's quality.
That being said, it may be that seeking a job via responding to job applications online is a sign of a lower-tier candidate, all else equal. Whether a writer submits to editors independently or via an agent may say a lot about the writer's quality, and whether a first date comes from an app, a recommendation from a friend, or flirtation at a party might say a lot about the potential romantic partner.
Ah, that makes sense. Yes, I agree that carefully breaking down an argument into steps isn’t necessarily better than just letting it grow by bits and pieces. What I’m trying to emphasize is that if you can transmit an attitude of interest and openness in the topic, the classic idea of instilling passion in another person, then that solves a lot of the problem.
Underneath that, I think a big barrier to passion, interest and openness for some topic is a feeling that the topic conflicts with an identity. A Christian might perceive evolution as in conflict with their Christian identity, and it will be difficult or impossible for even the most inspiring evolutionist to instill interest in that topic without first overcoming the identity conflict. That’s what interests me.
I don’t think that identity conflict explains all failures to connect, not by a long shot. But when all the pieces are there - two smart people, talking at length, both with a lot of energy, and yet there’s a lot of rancor and no progress is made - I suspect that identity conflict perceptions are to blame.
I do think that visualizing the social world as a bright network of lively, private social connections with these relatively bland public outlets is a useful and probably neglected one. And the idea that a certain inaccessibility or privacy is key for their survival is important too. I visualize it more as a sort of faerie forest. To many, it seems like there’s nothing there. In fact there’s a whole faerie realm of private society, but you need to seek out or luck into access, and it’s not always easy to navigate and connections don’t always lead you where you expect.
I'm not quite following you - I'm struggling to see the connection between what you're saying and what I'm saying. Like, I get the following points:
- Sometimes, you need to learn a bunch of prerequisites without experiencing them as useful, as when you learn your initial vocabulary for a language or the rudimentary concepts of statistics.
- Sometimes, you can just get to a place of understanding an argument and evaluating it via patient, step-by-step evaluation of its claims.
- Sometimes, you have to separate understanding the argument from evaluating it.
The part that confuses me is the third paragraph, first sentence, where you use the word "it" a lot and I can't quite tell what "it" is referring to.
That makes some sense to me. The most salient feature of the Dark Forest scenario, to me, is that we're in a bad prisoner's-dilemma-style game with something like the following payoff matrix:
- Cooperate/cooperate gets some finite positive utility for both players
- Defect/cooperate means ruin for the cooperator with equal or greater utility for the defector
- Defect/defect ruins one player with 50% probability
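In code, that payoff structure might look like the following (the utility numbers are made up; only their ordering matters):

```python
# Sketch of the Dark-Forest-style payoff matrix described above, with
# hypothetical utility numbers, to show why mutual defection dominates.

RUIN = -100.0  # stand-in utility for "ruin"

# payoffs[(my_move, their_move)] -> my expected utility
payoffs = {
    ("cooperate", "cooperate"): 10.0,              # finite positive utility for both
    ("cooperate", "defect"): RUIN,                 # the cooperator is ruined
    ("defect", "cooperate"): 15.0,                 # the defector gains at least as much
    ("defect", "defect"): 0.5 * RUIN + 0.5 * 0.0,  # 50% chance of ruin
}

def best_response(their_move):
    """Pick my move maximizing expected utility against a known opponent move."""
    return max(("cooperate", "defect"), key=lambda m: payoffs[(m, their_move)])
```

Whatever the other player does, defection yields the higher expected utility, which is what makes the scenario so grim.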
Of course, real-world decisions to participate in private events don't resemble this payoff matrix, which is why, for me, the Dark Forest scenario feels somehow too dramatic, or like it's too paranoid an account of the feeling of anxiety or boredom that comes with trying to connect with people in a new social setting full of strangers.
Or maybe I'd take that further and say that newcomers at parties often seem to operate as if they were in a Dark Forest scenario ("careful not to embarrass yourself, people will laugh at you if you say something dumb, probably everybody's mean here, or else annoying and foolish which is why they're at this public gathering, your feelings of anxiety and alienation are warranted!"). And it's much better if you realize that's not in fact the case. People there are like you - wanting to connect, pretty interesting, kind of anxious, just waiting for someone else to make the first move. There are all kinds of perfectly normal reasons people are choosing to hang out at a public gathering place rather than with their close friends, and if public gatherings seem "terrible," it's usually because of the intrinsic awkwardness of strangers trying to break the ice. They'd almost all be interesting and fun if they got to know each other and felt more comfortable.
But I do see the connection with the desire to avoid unpleasant strangers and the need to infer the existence of all these private get-togethers and communities.
Let's say I'm right, and a key barrier to changing minds is the perception that listening and carefully considering the other person's point of view amounts to an identity threat.
- An interest in evolution might threaten a Christian's identity.
- Listening to pro-vaccine arguments might threaten a conservative farmer's identity.
- Worrying about speculative AI x-risks might threaten an AI capability researcher's identity.
I would go further and claim that open-minded consideration of suggestions that rationalists ought to get more comfortable with symmetric weapons, like this one, might threaten a rationalist's identity. As would considering the idea that one has an identity as a rationalist that could be threatened by open-minded consideration of certain ideas!
If I am correct, then it would be important not to persuade others that this claim is correct (that's the job of honest, facts-and-logic argument), but to avoid a reflexive, identity-driven refusal to consider a facts-and-logic argument. I've already lost that opportunity in this post, which wasn't written with the goal of pre-empting such a reflexive dismissal. But in future posts, how might one do such a thing?
I don't think it's as simple as saying something explicit like "you can be a rationalist and still consider X," or "if you were a good rationalist, you should consider X with an open mind." Such statements feel like identity threats, and it's exactly that perception that we're trying to avoid!
I also don't think you just make the argument, relying on the rationalist form of the argument or your in-group status to avoid the identity threat. A fundamentalist preacher who starts sermonizing on Sunday morning about the truth of evolution is not going to be treated with much more receptivity by the congregation than they'd demonstrate if teleported into a Richard Dawkins lecture.
Instead, I think you have to offer up a convincing portrait of how it is that a dyed-in-the-wool, passionate rationalist might come to seriously consider outgroup ideas as an expression of rationalism. Scott Alexander, more than any other rationalist writer I've seen, does this extremely well. When I read his best work, and even his average work, I usually come away convinced that he really made an effort to understand the other side, not just by his in-group standards, but by the standards of the other side, before he made a judgment about what was true or not. That doesn't necessarily mean that Scott will be convincing to the people he's disagreeing with (indeed, he often does not persuade them), but it does mean that he can be convincing to rationalists, because he seems to be exemplifying how one can be a rationalist while deeply considering perspectives that are not the consensus within the rationalist community.
Eliezer does something really different. He seems to try to assemble an argument for his own point of view so thorough and with so many layers of meta-correctness that it seems as if there's simply no place a truthseeker could possibly arrive at except the conclusion that Eliezer himself has arrived at. Eliezer has explanations for how his opponents have arrived at error, and why it makes sense to them, but it's almost always presented as an unfortunate error that results from avoidable human cognitive biases that he himself has overcome to a greater degree than his opponents. This is often useful, but it doesn't exemplify how a rationalist can deeply consider points of view that aren't associated with the rationalist identity while preserving their identity as a rationalist intact. Indeed, Eliezer often comes across as if disagreeing with him might threaten your identity as a rationalist!
There are a lot of other valuable writers in the LessWrong world, but their output usually strikes me as very much "by rationalists, for rationalists."
To me, you seem to be describing a pretty ideal version of consciously practiced rationality - it's a good way to be or debate among those in scout mindset. That's useful indeed!
I am interested here mainly in how to better interface with people who participate in debate, and who may hold a lot of formal or informal power, but who do not subscribe to rationalist culture. People who don't believe, for whatever reason, in the idea that you can and should learn ideas thoroughly before judging them. Those who keep their identities large and opt to stay in soldier mindset, even if they wouldn't agree with Paul Graham or Julia Galef's framings of those terms or wouldn't agree such descriptors apply to them.
I think the important part is training a good simulation of a new worldview, not shifting weight to it or modifying an old worldview. To change your mind, you first need availability of something to change your mind to.
Do you mean that preserving your openness to new ideas is about being able to first try on new perspectives without necessarily adopting them as the truth? If so, I agree, and I think that captures another oft-neglected aspect of debate. We tend to lump together an explanation of what our worldview is, with a claim that our worldview is true.
When all participants in the debate view opportunities to debate the topic in question as rare and consequential, all the focus goes into fighting over some sort of perception of victory, rather than on trying to patiently understand the other person's point of view. Usually, that requires allowing the other person to adopt, at least for a while, the perceived role of the expert or leader, and there's always a good chance they'll refuse to switch places with you and try and learn from you as well.
That said, I do think that there are often real asymmetries in the level of expertise that go unrecognized in debate, perhaps for Dunning-Kruger reasons. Experts shouldn't demand deference to their authority, and I don't think that strategy works very well. Nevertheless, it's important for experts to be able to use their expertise effectively in order to spread knowledge and the better decision-making that rests on it.
My take is that this requires experts to understand the identities and formative memories that underpin the incorrect beliefs of the other person, and conduct their discussion in such a way as to help the other person see how they can accept the expert's knowledge while preserving their identity intact. Sometimes, that will not be possible. An Atheist probably can't convince a Christian that there's a way to keep their Christian identity intact while disbelieving in God.
Other times, it might be. Maybe an anti-vax person sees themselves as a defender of personal freedom, a skeptic, a person who questions authority, in harmony with nature, or protective of their children's wellbeing.
We might guess that being protective of their children's wellbeing isn't the central issue, because both the pro- and anti-vax side are striving hard to reinforce that identity. Skepticism probably isn't the main motive either, since there's lots to be skeptical of in the anti-vax world.
But defending personal freedom, questioning authority, and being in harmony with nature seem to me to be identities more in tune with being anti-vax than pro-vax. I imagine large billboards saying "The COVID-19 vaccine works, and it's still OK if you don't get it" might be a small step toward addressing the personal freedom/question authority identity. And if we'd framed COVID-19 as a possible lab-generated superbug, with the mRNA vaccine harnessing your body's natural infection-fighting response rather than being an example of big pharma at its novel and high-tech best, we might have done a better job of appealing to the 'in harmony with nature' identity.
Hi Raemon! I found the comparison between the Dark Forest explanation for the Fermi Paradox and the more prosaic examples of group houses and meetups thought-provoking. Do you see the comparison as more of a loose analogy, or are they both examples of a single phenomenon? Or is the dissolution of human communities done to avoid them turning into a Dark Forest, or perhaps in response to the first signs that they might be turning into one?
My own take is that group houses and meetups sometimes have a flavor of Dark Forest to them, when there's one or more predatory people whose uncomfortable attentions everybody is trying to avoid. I have often seen this happen with men competing for the romantic attention of young women in these settings. The women aren't necessarily trying to hide in the shadows, and the men aren't typically trying to destroy a potential rival so much as to win the woman's attention, but the women do seem to have to figure out ways to avoid unwanted attention from these men. But that only seems superficially related to the Dark Forest explanation.
In my experience, group houses and meetups mostly break up naturally for pretty prosaic reasons: organizers get tired of running them, people build friendship and relationship networks that become more appealing than even a good meetup, people move away, the rent goes up, people's priorities change, a funder pulls out. The stable ones persist not so much because they're avoiding the attention of aggressive rivals as because they feel their time is best spent on each other, rather than in cultivating new relationships with outsiders.
However, I haven't spent time around the SF rationalist community - mainly around Seattle alternative/arts communities back in my 20s. Maybe the dynamics are different?
Making Beliefs Identity-Compatible
When we view our minds through the lens of large language models (LLMs), with their static memory prompts and mutable context window, we find a fascinating model of belief and identity formation. Picture this in the context of a debate between an atheist and a creationist: how can this LLM-like model explain the hurdles in finding common ground?
Firstly, we must acknowledge our belief systems, much like an LLM, are slow to change. Guided by a lifetime of self-reinforcing experiences, our convictions, whether atheistic or creationist, develop a kind of inertia. They become resistant to sudden shifts, even in the face of a compelling argument.
Secondly, our beliefs generate self-protective outputs in our mind's context window. These outputs act as a defense mechanism, safeguarding our core beliefs and often leading to reactionary responses instead of open-minded dialogue.
To break through these barriers, we need to engage the mutable part of our cognitive context. By introducing new social connections and experiences, we can gently nudge the other person to add to or reinterpret their static memory prompts. This might mean introducing your creationist friend to scientifically open-minded Christians, or even to promoters of intelligent design. Perhaps you can encourage them to study evolution not as "truth," but as an alternative and interesting point of view, in much the same way that an atheist might take a passionate interest in world religions or an economist might work to master the point of view of a contradictory school of economic thought.
In the heat of a debate, however, this is rarely implemented. Instead, atheists and creationists alike tend to grapple with their fundamental disagreements, neglecting to demonstrate how an open-minded consideration of each other's viewpoint can be reconciled with their existing identity.
To make headway in such debates, it's not enough to present our viewpoint compellingly. We must also illustrate how this viewpoint can be entertained without upending the essential aspects of the other person's identity. Only by doing this can we truly bridge the ideological divide and foster a richer, more empathetic dialogue, worthy of our complex cognitive architecture.
It may not have saved, it still reads the same way to me (on a different device, so it's not just cached or anything like that).