Posts

Tim Dillon's fake business is the most influential video I have watched in the last 24 months 2024-07-22T12:54:43.749Z
Chronic perfectionism through the eyes of school reports 2024-06-19T17:46:02.229Z
Escaping Skeuomorphism 2023-12-20T03:51:00.489Z
Double-negation as framing 2023-04-16T06:59:25.189Z

Comments

Comment by Stuart Johnson (stuart-johnson) on Chronic perfectionism through the eyes of school reports · 2024-06-20T09:02:27.044Z · LW · GW

Wow, that was a fascinating read - thank you for linking it. Most interesting to me was the separation of self-perfectionism from social perfectionism as a clinical concern. I've never felt social perfectionism, and ironically almost all of the trouble I got myself into as a child came from actively rebelling against social expectations. I'm glad to hear that the literature also treats these as distinct.

Comment by Stuart Johnson (stuart-johnson) on What's with all the bans recently? · 2024-04-05T14:31:34.718Z · LW · GW

I don't really know; the best I can offer is vaguely gesturing at LessWrong's moderation vector and pointing in a direction.

LW's rules go for a very soft, very subjective approach to definitions and rule enforcement. In essence, anything the moderators feel is against the LW ethos is against the rules here. That's the right approach to take in an environment where the biggest threat to good content is bad content. Hacker News also takes this approach and it works well - it keeps HN protected against non-hackers.

ChangeMyView is somewhat under threat of bad content - if too many people post on a soapbox, then productive commenters will lose hope and leave the subreddit. However, it's also under threat of loss of buy-in - people with non-mainstream views, or those likely to attract backlash elsewhere, need to feel that the space is safe for them to explore.

When optimising for buy-in, strictness and clarity are desirable. We had roughly consistent standards for the number of violations needed to earn a ban, and consistently escalating bans (3 days, 30 days, permanent) in line with behavioural infractions. When there were issues, there seemed to be buy-in that we were at least consistent (even if the things we were consistent about weren't optimal). That consistency provided a plausible alternative to the motive uncertainty created by subjective enforcement - for example, the admins told us we were fine to continue hosting discussions regarding gender and race that were being cracked down on elsewhere on Reddit.

Right now, I think LW is doing a good job of defending against bad content. What would make LW stronger is a semi-constitutional backbone to fall back on in times of unrest - kind of like how the 5th pillar of Wikipedia is to ignore all rules, yet policy is still the essential basis of editing discussions.

For commenting guidelines, I would like to see clearer definitions of what excess looks like. I think the subjective approach is fine for posts for now.

Comment by Stuart Johnson (stuart-johnson) on What's with all the bans recently? · 2024-04-04T13:15:40.070Z · LW · GW

I spent several years moderating r/changemyview on Reddit, which also has this rule. Having removed hundreds of comments that break it, I think the worst thing about it is that it rewards aloofness and punishes sincerity. That's an acceptable trade-off to prevent the rise of very sincere flame wars, but it elevates people pretending to be wise at the expense of those with more experience, who likely hold more deeply felt but also better-informed opinions about the subject matter. This was easily the most common moderation frustration expressed by users.

Comment by Stuart Johnson (stuart-johnson) on AI-generated opioids are a catastrophic risk · 2024-03-20T18:00:24.597Z · LW · GW

You convince me of the outcome, but not of the comparative capacity:

  1. Drug addictivity has an upper limit - the % of people who take it once and become addicted, and the % of people who successfully quit. These cap at 100% and 0% respectively. Fentanyl probably isn't too far off that cap.
  2. Even without AI, opioids more addictive than fentanyl will probably be discovered at some point. How much extra capacity for creating addictiveness does AI add?

Comment by Stuart Johnson (stuart-johnson) on Ten Modes of Culture War Discourse · 2024-02-02T15:58:20.483Z · LW · GW

I think the important value here is not the assets changing hands as part of the exchange, but rather the value each party stands to gain from the exchange. Both parties agree that shaking hands on the current terms is acceptable to them, but they will both lie about that fact if they think it helps them move towards C or D.

Or to put it another way: in your frame, I don't think any kind of collaboration can ever be in anyone's interests unless you are aligned on Every Single Thing.

If I save a drowning person, then viewed in a mercenary way it is preferable to them that I not only save them but also give them my wallet. Therefore my saving them was not a product of aligned interests (desire to not drown + desire to help others), since the poor fellow must now continue to pay off his credit card debt when his preference is not to.

For me, B > A > D > C, and for the drowning man, A > B > C > D (here A = rescue + give wallet, B = rescue, no wallet, C = no rescue, throw wallet into water, D = walk away).

What matters in the drowning relationship (and the reason for our alignment) is B > C. Whether or not I give him my wallet is independent of whether I save him, and the resulting alignment should be considered separately.

In your example, I'm focusing on the alignment of A and B. Both parties will be dishonest about their views on A and B if they think it gets them closer to alignment on C and D. That's the insincerity.

Comment by Stuart Johnson (stuart-johnson) on Wrong answer bias · 2024-02-02T01:03:43.547Z · LW · GW

I feel like this post just slapped me in the face violently with a wet fish. I'm still reeling from the impact and trying to figure out how I feel about it.

Comment by Stuart Johnson (stuart-johnson) on Ten Modes of Culture War Discourse · 2024-02-02T00:39:57.001Z · LW · GW

I think it has a lot more to do with status quo preservation than truthseeking. If I'm Martha Corey living in Salem, I'm obviously not going to support continued investigations into the witching activities of my neighbours and husband, and the last reason for that is fear of the truth being exposed - that I've been casting hexes on the townsfolk all this time.

I think a much simpler explanation is that continued debate increases the chances I'm put on trial, and I'd much rather preserve the status quo of not debating whether I'm a witch. If it were a social norm in Salem to run annual witching audits on the townsfolk, perhaps I'd support a debate about ending them. The witch hunting guild might point a Kafkaesque finger at me in return, because they'd much rather keep up the audits.

Up stands Elizabeth Hubbard, who calmly explains that if no wrongdoing has taken place then no negative consequences will occur, and that she is concerned by the lack of clarity and accountability displayed by those who would shut down such discussions before they've even begun.

In your example, what makes Alice (Elizabeth) the guru and Bob (Martha) the siren? 

Comment by Stuart Johnson (stuart-johnson) on Ten Modes of Culture War Discourse · 2024-02-02T00:16:10.803Z · LW · GW

In almost all cases, the buyer will grossly exaggerate the degree to which values are not aligned in the hopes of driving the seller down in price. In most cases, the buyer has voluntarily engaged the seller (or even if they haven't, if they consider the deal worth negotiating then there must be some alignment of values). 

Even if I think the price is already acceptable to me, I will still haggle insincerely because of the prospect of an even better deal.

Comment by Stuart Johnson (stuart-johnson) on Ten Modes of Culture War Discourse · 2024-01-31T18:05:57.987Z · LW · GW

Great post, I enjoyed it.

  1. Regarding the insincere friendship - insincere enmity relationship, I think a very simple example of this I see all the time is the negotiating relationship between a seller and a buyer. The seller insincerely states that he thinks their values are aligned and that for this reason it's in the buyer's interests to buy, and the buyer insincerely states that he doesn't think their values are aligned (even if they are) because he wants a lower price.
  2. Regarding free speech, I think there's a missed complication in how the relationship plays out between conflict theorists. For example, many conservatives (and especially the pro-conflict, culture war conservatives) believe very strongly in the importance of free speech, and not just because they want to maintain their permit to troll. If words are an effective arena of battle for your group then you tend to be in favour of free speech, and if they've historically been used against your group then you tend to be against it.

Comment by Stuart Johnson (stuart-johnson) on A Challenge to Effective Altruism's Premises · 2024-01-06T20:19:59.586Z · LW · GW

Commenting before voting as requested.

After reading this several times, I think the point being made here can broadly be summed up as:

Capitalism is bad because it relies on self-interest (why?), and the size of the bad is measured by the number of people involved in it (why?). Helping people means they're more likely to both reproduce and be grateful to capitalism in a way that makes them want to preserve the status quo; ergo we ought not help people, because if we do, we will create more capitalist sycophants.

If I've misunderstood you, then it's because you aren't writing simply.

If I haven't misunderstood you, then I find the lack of a suggested alternative irreparably damaging to the claim made. 

Comment by Stuart Johnson (stuart-johnson) on The Next ChatGPT Moment: AI Avatars · 2024-01-06T13:54:31.448Z · LW · GW

My bet is that conversational agents get buy-in in the early days because of skeuomorphism, but are eventually phased out in favour of more efficient interaction styles.

Comment by Stuart Johnson (stuart-johnson) on Defense Against The Dark Arts: An Introduction · 2023-12-26T00:46:38.830Z · LW · GW

I think most of the best posts on this website about the dark arts are deep analyses of one particular rhetorical trick and the effect it has on a discussion. For example, Setting the Zero Point and The noncentral fallacy - the worst argument in the world? are both discussions of hypothesis privileging that relies on unstated premises. I think reading these made me earnestly better at recognising and responding to Dark Arts in the real world. Frame Control and its response, Tabooing "Frame Control", are also excellent reads in my opinion.

Comment by Stuart Johnson (stuart-johnson) on Defense Against The Dark Arts: An Introduction · 2023-12-25T22:45:09.835Z · LW · GW

Breaking my usual lurking habit to explain my downvote. I travel around a lot and compete in various debating competitions, so this topic is close to my heart. I read this as an attempt to raise the epistemic water level.

It is acknowledged, but I still find that this post veers wildly off-topic about halfway through and extraneously bashes Ramaswamy in a way I'm not sure is constructive.

The 2nd point harps on something valid which also irks me, but I think Scott beat you to the punch. Even given that, though, I don't think any of these things as given are particularly potent defences against the dark arts - either in debates or in life. I think unwillingness, apathy, or lack of capacity is a much bigger barrier to further academic reading than any failure to recognise that subject matter experts are more accurate than random YouTube punters.

I wrote a lot more here, but I'm deleting it to instead say that this post lacks focus and breadth - I think it is simultaneously too shallow in the advice it gives (read primary sources, be educated, don't fall prey to the Dunning-Kruger effect) and too specific and mindkilling in the examples it chooses (a long explanation of why Ramaswamy is Super Wrong about this one thing he said) to be pedagogic.

Comment by Stuart Johnson (stuart-johnson) on Double-negation as framing · 2023-04-17T03:08:14.617Z · LW · GW

Perhaps poorly phrased. I was trying to hint at the use of Isolated Demands For Rigor to skeptically dismiss all evidence. This post was inspired by someone I used to work with who, amongst other things, would talk about "fundamental issues" and "the big picture" in a vague way as rhetorical devices to discard certain pieces of evidence.

Comment by Stuart Johnson (stuart-johnson) on Double-negation as framing · 2023-04-16T14:59:41.796Z · LW · GW

I guess an ending where I throw my hands up and say "oh no, my reasoning" was simultaneously the most likely and the most beneficial outcome of finally wading in to throw up a post of my own. The critique is fair enough, and it would seem that at least to some degree I have in fact missed the point.

I still think there's something here beyond just privileging a hypothesis and Orwell's complaint about double negation as euphemism. Perhaps the real thrust of my point was that double negation makes it harder to notice that you've privileged a hypothesis. Socratic questioning is good but tends to bore an audience, takes a long time, and doesn't lead to the kind of decisive rhetorical victory you need to win a manoeuvring competition. There might be something in rephrasing Socratic questions as propositions instead, but I'm not currently sure what that would look like.

There's a wealth of valid insight in the rationalism community, but it goes unused if you can't win the frame in the first place. In many contexts it's not sufficient to be right; you must also be rhetorically persuasive. I've not yet come across a convincing framework for melding the two.

Comment by Stuart Johnson (stuart-johnson) on Double-negation as framing · 2023-04-16T13:16:10.574Z · LW · GW

I am always impressed by how much insight LW users can cram into a small number of words. One angle I feel has been underdiscussed on LW is effective rhetorical devices for dealing with people who are very good at using the dark arts. This post was inspired by my experience with an old colleague, with whom I had the exact conversation in the green-purple example many times.

I somehow missed Setting the Zero Point, and it's extremely thorough, but I wish it were more like Proving Too Much - advice on how to convince an audience that rationality is valuable.

Well, replacing "X" with "not anti-X" makes it a weaker statement, and it's quicker, easier, and less risky to make a weaker statement when you're writing in a political or corporate environment.

This is the opposite conclusion to the one I reached - that positive values are evaluated on balance, while negative values are evaluated by their exclusivity. I think we're talking about subtly different phenomena, though - I'm not considering euphemism here, just scaling the rigidity. I do agree that self-deceit is an important part of framing conflicts. It might be worth a whole new post, but I theorize that mental resistance to using rhetorical dark arts is strongly associated with openness to experience and one's personal relationship with doubt and learning.

Do you know of any particularly good essays that focus on countering the dark arts performatively for an audience beyond just being aware of them?

I've spent several years competing in university debating and I've learned a lot about the practical application of one very specific kind of dark arts, but interpersonal dark arts are a different sort that I want to learn more about.