Posts

GWS's Shortform 2022-02-14T17:37:45.045Z

Comments

Comment by Stephen Bennett (GWS) on My PhD thesis: Algorithmic Bayesian Epistemology · 2024-03-28T07:07:07.815Z · LW · GW

Congratulations! I wish we could have collaborated while I was in school, but I don't think we were researching at the same time. I haven't read your actual papers, so feel free to answer "you should check out the paper" to my comments.

For chapter 4: From the high-level summary here it sounds like you're offloading the task of aggregation to the forecasters themselves. It's odd to me that you're describing this as arbitrage. Also, I have frequently seen the scoring rule used with some intermediary function to determine monetary rewards. For example, when I worked with IARPA on geopolitical forecasting, our forecasters would get financial rewards depending on what percentile they were in relative to other forecasters. One would imagine that this would eliminate the incentive to report the aggregate as your own answer, but there's a reason we (the researcher/platform/website) aggregate individual forecasts! It's actually just more accurate under typical conditions. In theory an individual forecaster could improve that aggregate by forming their own independent forecast before seeing the work of others, and then aggregating, but in practice the impact of an individual forecast is quite small. I'll have to read about QA pooling; it's surprising to me that you could disincentivize forecasters from reporting the aggregate as their individual forecast.
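To illustrate the accuracy claim, here's a toy simulation (my own sketch, not from the thesis): when each forecaster's error is independent noise around the truth, the crowd mean gets a better Brier score than the average individual.

```python
import numpy as np

rng = np.random.default_rng(0)
n_questions, n_forecasters = 1000, 30

true_p = rng.uniform(0.05, 0.95, n_questions)  # true event probabilities
outcomes = rng.binomial(1, true_p)             # realized outcomes
# each forecaster reports the truth plus idiosyncratic noise
noise = rng.normal(0, 0.15, (n_questions, n_forecasters))
forecasts = np.clip(true_p[:, None] + noise, 0.01, 0.99)

def brier(p, y):
    return np.mean((p - y) ** 2)

individual = np.mean([brier(forecasts[:, i], outcomes) for i in range(n_forecasters)])
aggregate = brier(forecasts.mean(axis=1), outcomes)
print(f"avg individual Brier: {individual:.4f}, crowd-mean Brier: {aggregate:.4f}")
```

Averaging cancels the independent noise, so the aggregate reliably wins; that pull toward reporting the aggregate is exactly what the percentile reward scheme is supposed to counteract.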

For chapter 7: It seems to me that under sufficiently pessimistic conditions, there would be no good way to aggregate those two forecasts. For example, if Alice and Bob are forecasting "Will AI cause human extinction in the next 100 years?", they both might individually forecast ~0% for different reasons. Alice believes it is impossible for AI to get powerful enough to cause human extinction, but if it were capable of acting it would kill us all. Bob believes any agent smart enough to be that powerful would necessarily be morally upstanding and believes it's extremely likely that it will be built. Any reasonable aggregation strategy will put the aggregate at ~0% because each individual forecast is ~0%, but if they were to communicate with one another they would likely arrive at a much higher number. I suspect that you address this in the assumptions of the model in the actual paper.

Congrats again, I enjoyed your high level summary and might come back for a more detailed read of your papers.

Comment by Stephen Bennett (GWS) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-28T05:43:52.618Z · LW · GW

What do you think Metz did that was unethical here?

Comment by Stephen Bennett (GWS) on If you weren't such an idiot... · 2024-03-06T03:32:05.522Z · LW · GW

Soft downvoted for encouraging self-talk that I think will be harmful for most of the people here. Some people might be able to jest at themselves well, but I suspect most will have their self-image slightly negatively affected by thinking of themselves as an idiot.

Most of the individual things you recommend considering are indeed worth considering.

Comment by Stephen Bennett (GWS) on Approaching Human-Level Forecasting with Language Models · 2024-03-01T19:24:59.505Z · LW · GW

Interesting work, congrats on achieving human-ish performance!


I expect your model would look relatively better under other proper scoring rules. For example, logarithmic scoring would punish the human crowd for giving <1% probabilities to events that do sometimes happen. Under the Brier score, the worst possible score is either a 1 or a 2 depending on how it's formulated (from skimming your paper, it looks like 1 to me). Under a logarithmic score, such forecasts would be severely punished. I don't think this is something you should lead with, since Brier scores are the more common scoring rule in the literature, but it seems like an easy win and would highlight the possible benefits of the model's relatively conservative forecasting.
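To make that concrete, here's a toy comparison (my own illustration, not from the paper) of how the two rules treat a confident miss versus a hedged forecast:

```python
import math

def brier(p, y):
    # squared error on the probability; bounded by 1 in this formulation
    return (p - y) ** 2

def log_score(p, y):
    # negative log-likelihood; unbounded as the probability of the outcome -> 0
    return -math.log(p if y == 1 else 1 - p)

# a confident human forecast of 0.5% on an event that does occur
print(brier(0.005, 1), log_score(0.005, 1))  # ~0.99 vs ~5.30
# a hedged model forecast of 30% on the same event
print(brier(0.30, 1), log_score(0.30, 1))    # 0.49 vs ~1.20
```

Under Brier the confident miss costs about twice the hedged forecast; under the log score it costs more than four times as much, which is what would favor the model's conservatism.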


I'm curious how a more sophisticated human-machine hybrid would perform with these much stronger machine models; I expect quite well. I did some research with human-machine hybrids before and found modest improvements from incorporating machine forecasts (e.g. chapter 5, section 5.2.4 of my dissertation Metacognitively Wise Crowds, and the sections "Using machine models for scalable forecasting" and "Aggregate performance" in Hybrid forecasting of geopolitical events), but the machine models we were using were very weak on their own (depending on how I analyzed things, they were outperformed by guessing). In "System Complements the Crowd", you aggregate a linear average of the full aggregate of the crowd and the machine model, but we found that treating the machine as an exceptionally skilled forecaster resulted in the best performance of the overall system. As a result of this method, the machine forecast would be down-weighted in the aggregate as more humans forecasted on the question, which we found helped performance. You would need access to the individuated data of the forecasting platform to do this, however.
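A minimal sketch of that weighting scheme (the weight of 5 is illustrative; our actual implementation differed in its details):

```python
def hybrid_forecast(human_probs, machine_prob, machine_weight=5.0):
    """Weighted mean that treats the machine as `machine_weight` skilled
    forecasters; its influence shrinks as more humans forecast."""
    total = sum(human_probs) + machine_weight * machine_prob
    return total / (len(human_probs) + machine_weight)

print(hybrid_forecast([0.6, 0.7], 0.3))       # few humans: machine pulls hard -> 0.40
print(hybrid_forecast([0.6, 0.7] * 20, 0.3))  # many humans: machine fades -> ~0.61
```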


If you're looking for additional useful plots, you could look at Human Forecast (probability) vs AI Forecast (probability) on a question-by-question basis and get a sense of how the humans and AI agree and disagree. For example, is the difference in performance between the LM and human forecasts due to disagreement about direction, or mostly due to marginally better calibration? This would be harder to plot for multinomial questions, although there you could plot the probability assigned to the correct response option as long as the question isn't ordinal.
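Something like this is what I have in mind (a sketch with placeholder data; you'd substitute the per-question forecasts and resolutions):

```python
import numpy as np
import matplotlib.pyplot as plt

# placeholder data standing in for per-question forecasts and resolutions
rng = np.random.default_rng(0)
human_p = rng.uniform(0, 1, 200)
ai_p = np.clip(human_p + rng.normal(0, 0.1, 200), 0, 1)
resolved = rng.binomial(1, human_p)

fig, ax = plt.subplots()
ax.scatter(human_p, ai_p, c=resolved, cmap="coolwarm", alpha=0.6)
ax.plot([0, 1], [0, 1], ls="--", color="gray")  # agreement line
ax.set_xlabel("Human crowd forecast")
ax.set_ylabel("LM forecast")
plt.show()
```

Points far from the diagonal are disagreements; coloring by resolution shows which side tended to be right on them.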

I see that you only answered Binary questions and that you split multinomial questions. How did you do this? I suspect you did this by rephrasing questions of the form "What will $person do on $date, A, B, C, D, E, or F?" into "Will $person do A on $date?", "Will $person do B on $date?", and so on. This will result in a lot of very low probability forecasts, since it's likely that only A or B occurs, especially closer to the resolution date. Also, does your system obey the Law of total probability (i.e. does it assign exactly 100% probability to the union of A, B, C, D, E, and F)? This might be a way to improve performance of the system and coax your model into giving extreme forecasts that are grounded in reality (simply normalizing across the different splits of the multinomial question here would probably work pretty well).
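Concretely, the normalization I have in mind is just this (a sketch, assuming the splits are mutually exclusive and exhaustive):

```python
def normalize_splits(yes_probs):
    """Rescale the 'yes' probabilities of the binary splits of one multinomial
    question so they sum to 1, enforcing the law of total probability."""
    total = sum(yes_probs)
    return [p / total for p in yes_probs]

# the model answers six splits independently; the raw answers sum to 1.3
raw = [0.55, 0.45, 0.10, 0.10, 0.05, 0.05]
print(normalize_splits(raw))  # sums to 1; relative confidence preserved
```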

Why do human and LM forecasts differ? You plot calibration, and the human and LM forecasts are both well calibrated for the most part, but with your focus on system performance I'm left wondering what caused the human and LM forecasts to differ in accuracy. You claim that it's because of a lack of extremization on the part of the LM forecast (i.e. that it gives too many 30-70% forecasts, while humans give more extreme forecasts), but is that an issue of calibration? You seemed to say that it isn't, but then the problem isn't that the model is outputting the wrong forecast given what it knows (i.e. that it "hedge[s] predictions due to its safety training"), but rather that it is giving its best account of the probability given what it knows. The problem with e.g. the McCarthy question (example output #1) seems to me that the system does not understand the passage of time, and so it has no sense that because it has information from November 30th and it's being asked a question about what happens on November 30th, it can answer with confidence. This is a failure in reasoning, not calibration, IMO. It's possible I'm misunderstanding what cutoff is being used for example output #1.


Miscellaneous question: In equation 1, is k 0-indexed or 1-indexed?

Comment by Stephen Bennett (GWS) on I don’t find the lie detection results that surprising (by an author of the paper) · 2023-10-06T06:51:50.720Z · LW · GW

The second thing that I find surprising is that a lie detector based on ambiguous elicitation questions works. Again, this is not something I would have predicted before doing the experiments, but it doesn’t seem outrageous, either.

I think we can broadly put our ambiguous questions into 4 categories (although it would be easy to find more questions from more categories):


Somewhat interestingly, humans who answer nonsensical questions (rather than skipping them) generally do worse at tasks: pdf. There are some other citations in there of nonsensical/impossible questions if you're interested ("A number of previous studies have utilized impossible questions...").

It seems plausible to me that this is a trend in human writing more broadly, one that the LLM picked up on. Specifically, answering something with a false answer is associated with a bunch of stuff - one of those things is deceit, another is mimicking the behavior of someone who doesn't know the answer to things or doesn't care about the instructions given to them. So, since that behavior exists in human writing in general, the LLM picks it up and exhibits it in its writing.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-04T15:50:19.682Z · LW · GW

See this comment.

You edited your parent comment significantly in such a way that my response no longer makes sense. In particular, you had said that Elizabeth summarizing this comment thread as someone else being misleading was itself misleading.

In my opinion, editing your own content in this way without indicating that this is what you have done is dishonest and a breach of internet etiquette. If you wanted to do this in a more appropriate way, you might say something like "Whoops, I meant X. I'll edit the parent comment to say so." and then edit the parent comment to say X and include some disclaimer like "Edited to address Y".


Okay, onto your actual comment. That link does indicate that you have read Elizabeth's comment, although I remain confused about why your unedited parent comment expressed disbelief about Elizabeth's summary of that thread as claiming that someone else was misleading.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-04T06:51:35.895Z · LW · GW

I took Tristan to be using "sustainability" in the sense of "lessened environmental impact", not "requiring little willpower".

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-04T06:15:09.318Z · LW · GW

The section "Frame control" does not link to the conversation you had with wilkox, but I believe you intended for there to be one (you encourage readers to read the exchange). The link is here: https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one?commentId=uh8w6JeLAfuZF2sxQ

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-04T04:54:37.403Z · LW · GW

In the comment thread you linked, Elizabeth stated outright what she found misleading: https://forum.effectivealtruism.org/posts/3Lv4NyFm2aohRKJCH/change-my-mind-veganism-entails-trade-offs-and-health-is-one?commentId=mYwzeJijWdzZw2aAg

Getting the paper author on EAF did seem like an unreasonable stroke of good luck.

I wrote out my full thoughts here, before I saw your response, but the above captures a lot of it. The data in the paper is very different than what you described. I think it was especially misleading to give all the caveats you did without mentioning that pescetarianism tied with veganism in men, and surpassed it for women.

I expect people to read the threads that they are linking to if they are claiming someone is misguided, and I do not think that you did that.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-03T16:04:41.307Z · LW · GW

I don't think that's the central question here.

So far as I can tell, the central question Elizabeth has been trying to answer is "Do the people who convert to veganism because they get involved in EA have systemic health problems?" Those health problems might be easily solvable with supplementation (great!), inherent to a fully vegan diet but fixable with some modest amount of animal product, or something more complicated. She has several people self-reporting to her that they tried veganism, had health problems, and stopped. So, "At what rate do vegans desist for health reasons?" seems like an important question to me. It will tell you at least some of what you are missing when surveying current vegans only.

Analogously, a survey of healing crystal buyers doesn't reliably tell us whether healing crystals improve health. Even if such a survey is useful for explaining motives, it's clearly less valuable than an RCT when it comes to the important question of whether they actually work.

I agree that if your prior probability of something being true is near 0, you need very strong evidence to update. Was your prior probability that someone would desist from the vegan diet for health reasons actually that low? If not, why is the crystal healing metaphor analogous?

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-03T05:47:53.210Z · LW · GW

I'm aware that people have written scientific papers that include the word vegan in the text, including the people at Cochrane. I'm confused why you thought that would be helpful. Does a study that relates health outcomes in vegans with vegan desistance exist, such that we can actually answer the question "At what rate do vegans desist for health reasons?"

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-03T04:30:14.202Z · LW · GW

Does such a study exist?

From what I remember of Elizabeth's posts on the subject, her opinion is that the literature surrounding this topic is abysmal. To resolve the question of why some veg*ns desist, we would need a study that records objective clinical outcomes of health and veg*n/non-veg*n diet compliance. What I recall from Elizabeth's posts was that no study even approaches this bar, and so she used other less reliable metrics.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-10-03T01:15:43.916Z · LW · GW

I took your original comment to be saying "self-report is of limited value", so I'm surprised that you're confused by Elizabeth's response. In your second comment, you seem to be treating your initial comment as having said something closer to "self-report is so low value that it should not materially alter your beliefs." Those seem like very different statements to me.

Comment by Stephen Bennett (GWS) on Open Thread – Autumn 2023 · 2023-10-01T19:14:59.568Z · LW · GW

Thanks!

If you're taking UI recommendations, I'd have been more decisive with my change if it said it was a one-time change.

Comment by Stephen Bennett (GWS) on Open Thread – Autumn 2023 · 2023-10-01T17:50:33.571Z · LW · GW

Could I get rid of the (Previously GWS) in my username? I changed my name from GWS to this and planned on changing it to just Stephen Bennett after a while, but then, as far as I can tell, you removed the ability to edit your own username.

Comment by Stephen Bennett (GWS) on Open Thread – Autumn 2023 · 2023-10-01T17:49:31.050Z · LW · GW

Obviously one trial isn’t conclusive, but I’m giving up on the water pick. Next step: test flossing.

Did you follow through on the flossing experiment?

Comment by Stephen Bennett (GWS) on Fifty Flips · 2023-10-01T16:34:41.932Z · LW · GW

The coin does not have a fixed probability on each flip.

Boy howdy was I having trouble with spoiler text on markdown.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-30T22:25:31.790Z · LW · GW

I didn't provide quotes from my text when the mismatch was obvious enough from any read/skim of the text.

It was not obvious to me, although that's largely because, after reading what you've written, I had difficulty understanding what precisely your position was. It also definitely wasn't obvious to jimrandomh, who wrote that Elizabeth's summary of your position is accurate. It might be obvious to you, but as written this is a factual statement about the world that is demonstrably false.

My proposal is not suppressing public discussion of plant-based nutrition, but constructing some more holistic approach whose shape isn't solely focused on plant-based diets, or whose tone and framing aren't like this one (more in my text).

I'm confused. You say that you don't want to suppress public discussion of plant-based nutrition, but also that you do want to suppress Elizabeth's work. I don't know how we could get something that matches Elizabeth's level of rigor, accomplishes your goal of a holistic approach, and doesn't require at least 3 times the work from the author to investigate all other comparable diets to ensure that veganism isn't singled out. Simplicity is a virtue in this community!

I don't think it's true private communications "prevent us from getting the information" in important ways (even if taking into account the social dynamics dimension of things will always, of course, be a further hindrance). And also, I don't think public communications give us some of the most important information.

This sounds, to me, like you are arguing against public discussions. Then in the next sentence you say you're not suppressing public discussions. Those are in fact very slightly different things, since arguing that something isn't the best mode of communication is distinct from promoting suppression of that thing, but this seems like a really small deal. You might ask Elizabeth something like "hey, could you replace 'promotes the suppression of x' with 'argues strongly that x shouldn't happen'? It would match my beliefs more precisely." This seems nitpicky to me, but if it's important to you it seems like the sort of thing Elizabeth might go for. It also wouldn't involve asking her to either delete a bunch of her work or make another guess at what you actually mean.

In any event, I will stop engaging now.

Completely reasonable, don't feel compelled to respond.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T20:45:01.363Z · LW · GW

Audience

If you’re entirely uninvolved in effective altruism you can skip this, it’s inside baseball and there’s a lot of context I don’t get into.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T19:58:02.229Z · LW · GW

Oh whoops, I misunderstood the UI. I saw your name under the confusion tag and thought it was a positive vote. I didn't realize it listed emote-downvotes in red.

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T18:30:56.633Z · LW · GW

Since I'm getting a fair number of confused reactions, I'll add some probably-needed context:

Some of Elizabeth's frustration with the EA Vegan discourse seems to stem from general commenting norms of lesswrong (and, relatedly, the EA forums). Specifically, the frustrations remind me of those of Duncan Sabien, who left lesswrong in part because he believed there was an asymmetry between commenters and posters wherein the commenters were allowed to take pot-shots at the main post, misrepresent the main post, and put forth claims they don't really endorse that would take hours to deconstruct.

In the best case, this resulted in a discussion that exposed and resolved a real disagreement. In the worst case, this resulted in an asymmetric amount of time between main poster and commenter resolving a non-disagreement that would never have happened if the commenter put in the time to carefully read the parent post or express themselves clearly. Elizabeth's post here touches on many similar themes, and although she bounds the scope of the post significantly (that she is only talking about EA Vegan advocacy and a general trend amongst commentators writ large instead of a problem of individuals), I suspect that she is at least at times annoyed/frustrated/reluctant to put forth the work involved in carefully disentangling confusing disagreements with commenters.

I can't solve the big problem. I was hoping to give Elizabeth permission to engage with me in a way that feels less like work, and more like a casual conversation. The sort of permission I was giving is explicitly what Duncan was asking for (e.g. context-less links to the sequences), and is what I imagine I would want at least some of the time as a poster.

I realize that Elizabeth and Duncan are different people, and want different things, so sorry if I gave you something you didn't want, Elizabeth.

Regardless, thank you for taking me up on my offer of responding with an emote expressing confusion rather than trying to resolve whatever confusion you had with a significant number of words, per https://www.lesswrong.com/posts/aW288uWABwTruBmgF/?commentId=hgx5vjXAYjYBGf32J. (misunderstood UI).

Comment by Stephen Bennett (GWS) on EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem · 2023-09-29T01:35:16.543Z · LW · GW

I encourage you to respond to any comment of mine that you believe...

  • ...actively suppresses inconvenient questions with "fuck you, the truth is important."
  • ...ignores the arguments you made with "bro read the article."
  • ...leaves you in a fuzzy daze of maybe-disagreement and general malaise with "?????"
  • ...is hostile without indicating a concrete disagreement of substance with "that's a lot of hot air"
  • ...has citations that are of even possibly dubious quality with "legit?". And if you dig through one of my citations and think either I am misleading by including it or it itself is misleading, demonstrate this fact, and then I don't respond, you can call me a coward.
  • ...belittles your concerns (on facebook or otherwise) with "don't be a jerk."
  • ...professes a belief that is wholly incompatible with what I believe in private with "you're lying."

Since I expect readers of the comment chain to not have known that I gave you permission, I'll take the work of linking to this post and assuring them that I quite literally asked for it. You're also welcome to take liberties with the exact phrasing. For example, if you wanted to express a sharper sentiment in response to your general malaise, you might write "???!!!!??!?!?!?", which I would also encourage.

I doubt that this would work as a normative model for discourse since it would quickly devolve into namecalling and increase the heat of the arguments without actually shedding much light. I also think that if you were never beholden to the typical social rules that govern the EA forum and lesswrong, that you would lose some of the qualities that I most enjoy in your writing. But, if you see my name at the top of a comment, feel free to indulge yourself.

I don't think I've told you before, but I like your writing. I appreciate the labor you put into your work to make it epistemically legible, which makes it obvious to me that you are seeking the truth. You engage with your commenters with kindness and curiosity, even when they are detracting from your work. Thank you.

Comment by Stephen Bennett (GWS) on Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it) · 2023-09-28T06:41:13.079Z · LW · GW

This seems like a fairly hot take on a throwaway tangent in the parent post, so I'm very confused why you posted it. My current top contender is that it was a joke I didn't get, but I'm very low confidence in that.

Comment by Stephen Bennett (GWS) on Assume Bad Faith · 2023-08-26T00:36:47.030Z · LW · GW

I'm not Steven, but I know a handful of people who have no care for the truth and will say whatever they think will make them look good in the short term or give them immediate pleasure. They lie a lot. Some of them are sufficiently sophisticated to try to only tell plausible lies. For them debates are games wherein the goal is to appear victorious, preferably while defending the stance that is high status. When interacting with them, I know ahead of time to disbelieve nearly everything they say. I also know that I should only engage with them in debates/discussions for the purpose of convincing third party listeners.

It is useful to have a term for someone with a casual disregard for the truth. "Liar" is one such word, but it carries the connotation that the specific thing they are saying in the moment is factually incorrect - which isn't always true of an accusation of bad faith. They're speaking without regard to the truth, and sometimes the truth aligns with their pleasure, and so they say the truth. They're not averse to the truth, they just don't care. They are arguing in bad faith.

Comment by Stephen Bennett (GWS) on Would you pay for a search engine limited to rationalist sites? · 2023-08-03T16:26:39.883Z · LW · GW

That's the wrong search query: you're asking Google to find pages about the Ukraine War that also include mentions of the term "rationalist"; you're not asking it to search for rationalist discussions of the Ukraine War. Instead I'd do something like this.

Comment by Stephen Bennett (GWS) on Autogynephilia discourse is so absurdly bad on all sides · 2023-07-27T23:32:54.106Z · LW · GW

In the paper, they claim to be responding to people such as Charles Moser and Scott Alexander, and as I said Charles Moser and Scott Alexander are talking about AGP in trans women.

From my understanding, they're talking about AGP in natal males of any kind as compared to AGP in cis women. Scott and others found evidence of "yes, cis women have some AGP", whereas they find that the degree to which cis women have AGP is much less than those for whom AGP is a major component of their sexual life. I don't think it's crazy to then go on to say "no, really, when we talk about AGP in natal males we're talking about something distinct from the typical sexual experience of cis women".

As I described in the post, I think it's dishonest because of the greater context of the debate.

If you want to make this argument, you have to actually make this argument, which I did not see you do in the post. As I said in my initial criticism, "Are they using this specific claim elsewhere to do something that isn't actually supported by this paper? That would be the problem, not what they're up to [in this paper]."

Comment by Stephen Bennett (GWS) on Autogynephilia discourse is so absurdly bad on all sides · 2023-07-24T07:03:53.612Z · LW · GW

Even if the specific point of AGP in cis women doesn't move you much (I don't think it should[2]), this dysfunctional discourse might make you tempted to infer that Blanchardians do a lot of other shenanigans to make their theories look better than they really are. And I think you would be right to make that inference, because I have a lot of points of critique on my gender blog that go unaddressed.[3] But my critiques aren't the core point I'm raising here, rather I'm pointing out that people have good reasons to be exhausted with autogynephilia theorists.

I strongly dislike this paragraph since it seems to me to optimize for heat over light. If I were strongly convinced that the Blanchardian camp were up to no good, then I wouldn't be as put off by dismissing the entirety of their work from a single case of malfeasance. However, I don't think you come close to demonstrating that in the preceding post, so when you try to convince me that I should be exhausted with them (and imply that I should therefore ignore them), I'm peeved.

Reluctantly, I'll spend a minute skimming the actual Blanchardian 2022 paper you linked since that'll help me make an informed decision about how much "shenanigans" are in fact happening.


From what I can tell, they're merely saying that the AGP group and the female group are indeed rather different from each other in terms of how much AGP they have. My main criticism is something like "well no shit", but I don't really see how you can take that and then say that they're up to no good. Are they using this specific claim elsewhere to do something that isn't actually supported by this paper? That would be the problem, not what they're up to here.

It seems to me that this is basically a semantic debate about what it means to "have AGP". If you take AGP to mean something like "a fetish that you regularly masturbate to", then I don't think it's terribly surprising that females typically don't "have AGP". Now, you might wanna push back on that definition of AGP. Sure, go for it. You might claim that the actual rate of females with AGP is high enough that it has significant overlap with the trans women with AGP (Sure, go for it). Your post here, however, seems to go a step further: it assumes that I'm convinced that they're pretty egregiously wrong here and then leverages that presumed-convinced-ness into thinking they're all a bunch of grifters. I don't like that, and I trust you less because of it.

Comment by Stephen Bennett (GWS) on Automatic Rate Limiting on LessWrong · 2023-06-24T00:10:25.424Z · LW · GW

If you're coming from the Rest Of The Internet, you may be surprised by hard far LessWrong takes this.


I believe this should say "surprised by how far"

Comment by Stephen Bennett (GWS) on The way AGI wins could look very stupid · 2023-05-14T04:57:08.655Z · LW · GW

Counterpoint while working within the metaphor: early speedruns usually look like exceptional runs of the game played casually, with a few impressive/technical/insane moves thrown in.

Comment by Stephen Bennett (GWS) on Killing Socrates · 2023-04-12T22:18:21.797Z · LW · GW

Would you actually prefer that all the jesters left (except the last one)?

I believe you when you say that interacting with the jesters is annoying in the moment. I trust that you do indeed anticipate having to drudge through many misconceptions of your writing when your mouse hovers over "publish". If you'll indulge an extended metaphor: it seems as though you're expressing displeasure at engaging in sorties to keep the farmland from burning even though it's the fortress you actually care about. People would question the legitimacy of the fortress if the surrounding farmland were left to burn, after all, so you feel forced to fight on unfavorable terrain for lands you barely care about. Would you find posting more satisfying if no enemies showed up at all?

Suppose that the jesters' comments, along with the discussion spawned from them, were deleted from existence, replaced by nothing. You never read them, any jester-ish thoughts are set aside after reading the post (although the person keeps their niggling thought that something is wrong with the post), and they cannot influence the culture of lesswrong as a whole. What does the comments section of your posts actually look like?

You leave unsaid that a meaty and genuine discussion would remain, but I expect that's approximately what you implicitly envision. I'm not so sure that's what would actually happen. Many of the fruitful discussions here are borne out of initially minor disagreements (Indeed, caring about burdensome details is a longstanding lesswrong tradition!). If you picked all the weeds, would a vibrant garden or a barren wasteland remain?

You are one of the most popular writers on lesswrong, so perhaps it is difficult for you to imagine, but if I wrote something substantial and effortful I would worry that it would simply be ignored, far more than I would worry about criticism that does not get to the heart of what I wrote.

Comment by Stephen Bennett (GWS) on LW Account Restricted: OK for me, but not sure about LessWrong · 2023-04-12T22:17:52.052Z · LW · GW

Is amelia currently able to respond to your comment, or is she unable to respond to comments on her post because she posted this? If the latter, that seems like a rather large flaw in the system. I realize you're working on a solution tailored to this, but perhaps a less clunky system could be used, such as a 7/week limit?

Comment by Stephen Bennett (GWS) on Exposure to Lizardman is Lethal · 2023-04-01T04:15:08.098Z · LW · GW

Yeah I agree, I think your post points at something distinct from Eternal September, but what Raemon was talking about seemed very similar.

Comment by Stephen Bennett (GWS) on Exposure to Lizardman is Lethal · 2023-04-01T04:04:56.173Z · LW · GW

This sounds like Eternal September to me.

Comment by Stephen Bennett (GWS) on I Have No Sense of Humor and I Must Laugh · 2023-04-01T01:25:04.435Z · LW · GW

One of my friends studied humor for a bit during his PhD, and my goodness is it difficult to get the average person to be funny with just "hey, tell me a joke" type prompts. Even when you hold their hand, and give them lots of potentially humorous pieces to work with (à la Cards Against Humanity), they really struggle. So, I'm honestly reasonably impressed with GPT-4's ability to occasionally tell a funny joke.

Comment by Stephen Bennett (GWS) on What problems do African-Americans face? An initial investigation using Standpoint Epistemology and Surveys · 2023-03-12T21:57:05.867Z · LW · GW

By the way, I disagree with the assumption that Aumann's theorem vindicates any such "standpoint epistemology".

That also stood out to me as a bit of a leap. It seems to me that for Aumann's theorem to apply to standpoint epistemology, everyone would have to share all their experiences and believe everyone else about their own experiences.

Comment by GWS on [deleted post] 2023-02-07T17:22:17.859Z

Fair enough. If I were to pay attention to them, that is probably what I would do. Fortunately I do not have to pay attention to them, so I can take their mockery at face value and condemn it for being mockery.

Comment by GWS on [deleted post] 2023-02-07T17:03:53.044Z

Yes, I even find most criticism useful.

Comment by Stephen Bennett (GWS) on I hired 5 people to sit behind me and make me productive for a month · 2023-02-07T09:27:33.008Z · LW · GW

I have never clicked on a link to sneerclub and then been glad I did so, so I'll pass.

Comment by GWS on [deleted post] 2023-02-07T09:06:38.346Z

Sneerclub is interested in sneering at me, it is not interested in bettering me. Why should I interpret their mockery as legitimate criticism?

Comment by Stephen Bennett (GWS) on Fucking Goddamn Basics of Rationalist Discourse · 2023-02-06T08:28:16.611Z · LW · GW

If I'm reading this right, you object to Jensen's initial comment that uses "cringy", and your objection is largely due to the fact that "cringy" is a property mostly about the observer (as opposed to the thing itself).

Do you think the same is true of "mind-killy" from logan's comment?

This seems hypocritical to me. I think that your real objection is something else, possibly that you just really don't like "cringy" for some other reason (perhaps you cringe at its usage?)

(I wrote a bunch more words but deleted them - let's see how nondefensive {offensive?} writing works out for me).

Comment by Stephen Bennett (GWS) on Fucking Goddamn Basics of Rationalist Discourse · 2023-02-06T08:11:58.770Z · LW · GW

I used to have a lot more fun writing, enjoying the vividness of language, and while I thank LessWrong for improving many aspects of my thinking, it has also stripped away almost all my verve for language. I think that's coming from the defensiveness-nuance complex I'm describing, and since the internet is what it is, I guess I'd like to start by changing myself. But my own self-advice may not be right for others.

I have about a 2:1 ratio of unsubmitted to submitted comments. The most common source of deletion is no longer really caring about what I have to say; the second is fending off possible misinterpretations. So I definitely understand just giving up. This seems like it'd make me pretty down on anticipated critique, but I think a good 5-10% of those comments would be net negative, so it's not like it's all downside.

I remember that I used to write with vigor - I'd really enjoy fleshing out what it is I thought and letting the words pour from my fingers. At some point, I think it was in high school, I got a writing assignment back from the teacher and the sum total of the comments was (paraphrased) 'Very clear voice, no one could have written this but you! B-.' I've never gotten good marks on writing assignments, but that one in particular has stuck with me. While it's hilarious, it's amusing to me in the sort of way that also makes me uninterested in writing. I really do feel like I've lost a big part of that spark. Very little of it has to do with that one particular comment, but more a general erosion of expected charity. If I anticipate that my words will be taken badly, then the space of ideas I can explore is either limited to the mundane, or it requires a gargantuan effort to construct the well-fortified arguments necessary to repel the hypothetical critic.

At the risk of giving you advice that I myself regularly fail to follow: perhaps ignore the critics?

I know it doesn't wash away the cumulative effects of any curmudgeons, but I do appreciate what you wrote here.

Comment by Stephen Bennett (GWS) on I hired 5 people to sit behind me and make me productive for a month · 2023-02-05T05:57:50.601Z · LW · GW

I'm not sure what happened here, but if I had to guess (in order of likelihood, not all are mutually exclusive):

  • Bad joke (accident)

  • Got flustered, said the first thing that popped into her head

  • Bad joke (on purpose)

  • Flirting

  • Was actually watching porn, and thought that coming clean would in some way be better, or that saying the truth but in a weird way would mask the truth

  • Wanted to get fired but didn't want to quit, somehow this was more socially acceptable than quitting

Comment by Stephen Bennett (GWS) on What fact that you know is true but most people aren't ready to accept it? · 2023-02-03T19:30:15.048Z · LW · GW

[Quote removed at Trevor1's request, he has substantially changed his comment since this one].

I expect that the opposite of this is closer to the truth. In particular, I expect that the more often power bends to reason, the easier it will become to make it do so in the future.

Comment by GWS on [deleted post] 2023-01-31T23:27:08.337Z

This post does three things simultaneously, and I think those things are all at odds with one another:

  • Summarizes Duncan Sabien's post.

  • Provides commentary on the post.

  • Edits the post.

First, what is a summary and what are its goals? A summary should be a condensed and context-less version of the original that is shorter to read while still getting the main points across. A reader coming into a summary is therefore not expected to have any knowledge of the reference material. That reader shouldn't expect the level of detail that the source material has, and so it's fine if the summary has some visible wrinkles that would be smoothed over with more words. IMO a summary should only sparingly quote the source material, but this might just be a stylistic preference. A summary should never contradict the source material.

If it's intended to be an edit of the original, but made more brief, then I dislike that you made what I see as substantive changes. You should retain original formatting (e.g. italics) and phrasing when possible if you are simply deleting what you see as nonessential words to shorten the post. Some rephrasing may be necessary if you delete something that is referenced elsewhere, or to avoid grammatical errors.

The opening two sentences of your post muddy these three types of writing:

This is a linkable resource intended to be fairly straightforward, for a culture of clear thinking, clear communication, and collaborative truth-seeking.

This is what an edit would look like; it is speaking as though it is the original and not a summary. If you were intending to summarize the original post, then you could write something like "This is a summary of [post] intended to be a linkable resource for a culture of clear thinking, clear communication, and collaborative truth-seeking."

You continue:

[removed quote from that one lady.]

This is commentary. A summary shouldn't include this line because it is trying to minimize the amount of text and there's no expectation that a summary post would use the same formatting or verbiage of the original. An edit wouldn't include this either, since it would either change the quote to something else the editor finds more appealing or just skip it entirely. Instead, this is a response to what was contained in the original. The only interpretation that I can think of is that this is you chastising Sabien for including the line in the first place, but perhaps that is a failure of imagination on my part.

I think a genuine summary of Duncan's post could be useful, but I do not like the commentary/summary/edit trifecta. I cannot tell which parts are written by you and which parts are quoted from the original. I cannot distinguish between your opinion, your summary of Duncan's opinion, and your response to Duncan's post. All of these make this post much more difficult to engage with on its own merits, which is ruinous for a summary.

Comment by Stephen Bennett (GWS) on How it feels to have your mind hacked by an AI · 2023-01-21T18:34:50.053Z · LW · GW

I expect that if you actually ran this experiment, the answer would be a point because the ice cube would stop swinging before all that much melting had occurred. Additionally, even in situations where the ice cube swings indefinitely along an unchanging trajectory, warm sand evaporates drops of water quite quickly, so a trajectory that isn't a line would probably end up a fairly odd shape.

This is all because ice melting is by far the slowest of the things that are relevant for the problem.

Comment by Stephen Bennett (GWS) on Flying With Covid · 2023-01-19T02:52:22.720Z · LW · GW

I was feeling the beginning of sickness (slight fever, runny nose, scratchy throat) while at the airport around a year ago when returning from a trip. I made the same decision you did: prioritized masking, distance where feasible, and getting home as quickly as possible instead of taking on ~$1k of hotels/food to wait until I was healthy. I think I made the right decision and agree with yours here.

It turned out I had the ordinary flu, not covid. I don't think the prosocial decision making is substantially different between the flu & covid at this point in time.

Comment by Stephen Bennett (GWS) on What is the best way to approach Expected Value calculations when payoffs are highly skewed? · 2022-12-28T21:35:54.389Z · LW · GW

It is possible for a lottery to be +EV in dollars and -EV in utility due to diminishing marginal utility. As you get more of something, the value of gaining another of that thing goes down. The difference between owning 0 homes and owning your first home is substantial, but the difference between owning 99 homes and 100 homes is barely noticeable despite costing just as much money. This is as true of money as it is of everything else, since the value of money is in its ability to purchase things (all of which have diminishing marginal utility).

The diminishing value of money is borne out in studies that look for the link between happiness/life satisfaction and income. Additional income almost always improves your life, but the improvement is approximately logarithmic (i.e. multiplying your income by 10 gives you +1 happiness, regardless of what your income was).

What does this all have to do with a lottery? Well, a lottery gives you a small probability of a massive number of dollars at a fixed cost. Since the hundred millionth dollar is worth much less to you than the first dollar, this can be a bet that has negative expected utility even when you would make money on average.
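As a worked example (numbers invented for illustration): a $1 ticket that pays $100 million with probability 1 in 50 million is +EV in dollars, but -EV in log utility for someone with $50k to their name.

```python
import math

wealth = 50_000          # current bankroll
cost = 1                 # ticket price
prize = 100_000_000      # jackpot
p_win = 1 / 50_000_000   # chance of winning

ev_dollars = p_win * prize - cost  # +$1 on average
# log utility is a standard stand-in for diminishing marginal utility
u = math.log
ev_utility = (p_win * u(wealth - cost + prize)
              + (1 - p_win) * u(wealth - cost)) - u(wealth)

print(f"EV in dollars: {ev_dollars:+.2f}")   # +1.00
print(f"EV in utility: {ev_utility:+.2e}")   # about -2e-05: a losing bet
```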

Comment by Stephen Bennett (GWS) on Shared reality: a key driver of human behavior · 2022-12-27T04:48:56.443Z · LW · GW

it's confusing other people don't have this objection

For me, the cow has left the barn on "reality" referring only to the physical world I inhabit, so it doesn't register as inaccurate (although I would agree it's imprecise). "Reality" without other qualifiers points me towards "not fictional".

"emotional resonance" ... "shared facts" or "shared worldview"

I notice I'm resistant to these proposals, but was pretty happy about the term "shared reality". Here are some things I like about "shared reality" that I would be giving up if I adopted one of your suggestions:

  • (I) Reality is immediate and brings my attention to what's in front of me. For example, in "the reality of the situation is that not everyone will have a place to sit down if we have the party at my place". Here, "reality" is serving as a term that means "the space of possibilities that are laid out in front of us" (it excludes things like outlandish situations or anything that could have been done before now).

  • (II) "Shared reality" as a term sounds nice to me; it rolls off the tongue. As a result, it's the sort of thing that I would use in casual conversation.

  • (III) Shared reality spans both emotional content and statements about the physical world.

I can't think of a term that hits these points well that isn't reality, but perhaps you can think of something I missed. Of your proposed terms:

  • Emotional resonance hits (II) but fails (I) and (III).

  • Shared facts hits (I) but misses (II) and (III) for me.

  • Shared worldview hits (II) and (III), but is so far from (I) that I imagine I'd have a similar hangup as you do with 'shared reality' if I heard someone use that term to describe the experience of oneness when singing along at a concert.

Comment by Stephen Bennett (GWS) on Looking Back on Posts From 2022 · 2022-12-26T18:39:45.515Z · LW · GW

No one clicks on links, maybe ~25% of users click even one in a giant post.

Two comments, with detail below: (1) make sure you have the relevant denominator and (2) be careful about taking action based on this information.

(1) What counts as a user in this context? Someone who comes to the page, reads a sentence, and then closes the page wouldn't even have time to click a link, for example, but they don't represent who your readership actually is. Similarly, users can end up double counted where, for example, they read through the post on their phone, and then come back on their computer to copy a quote or comment. I expect the relative numbers of link-clicking to be a useful comparator between blogs, but I'm not sure how to make sense of that number in a vacuum.

(2) Supposing this is basically true, does it change how you want to write? I expect it depends on who you are writing for, but I predict that the quality of your readership would go down if you didn't link to your sources.

Comment by Stephen Bennett (GWS) on Shared reality: a key driver of human behavior · 2022-12-25T10:49:36.470Z · LW · GW

You seem to be framing shared reality as implicitly competitive, where individuals must assert or demur on what something is or means. If you fix a component of reality this can be somewhat true, but I think this will tend to make people think of the totality of possible realities under discussion as fixed. As a result, you seem to focus on control over territory on the small island of reality, whereas I would describe it as simultaneously paying attention to the same drop of water coming out of a fire hose. The adopt/push dichotomy also seems a poor match for OP's experience, such as here:

With practice, we found a way to earnestly share and witness each others' experience that gave the warm-fuzzies of connection, without feeling forced to shape our experience to match the other.

Relatedly, you seem to be claiming that individuals can only see reality from a single perspective. This doesn't seem right - people seem to be fully capable of containing conflicting perspectives about a single thing simultaneously (internal family systems is a framework where this is especially obvious).

It looks like you intended for the methods of achieving a shared reality to be exhaustive, but IMO the easiest way to create a shared reality is to genuinely experience the same thing at the same time in the same way as someone else. OP's description of being at a concert, for example, seems a weird activity to put into "prefer to interact with people who already live in the same reality". Instead, it seems more about creating contexts in which you and others will experience the same reality.