Split and Commit
post by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-11-21T06:27:07.557Z · LW · GW · 34 comments
This is an essay describing a basic sanity-inducing mental movement that I use approximately ten times per week, and suspect other people would benefit from adopting (and regularly reminding each other to do). I've talked about it elsewhere, but until now it didn't have its own linkable reference post.
P1: Things are often not what they seem.
This can be because the seeming itself is broken, e.g. the progression that goes:
"Well, here I am in 1700's America and it sure seems to me that these black folk are fundamentally intellectually and morally and spiritually inferior to white folk" or "Well, here I am in 1800's America and it sure seems to me that these women are constitutionally incapable of holding political office" or "Well, here I am in 1900's America and it sure seems to me that these gay men are an active threat to everyone's safety, including our women (somehow) and children" or "Well, here I am in 2000's America and it sure seems to me that ████ ██ ██████ ██ ████ ████████████ █ ███ ██ ██████ ██ ████ █████ ████████ ██ ████ ███ ████ ████ ██████ ███."
(Or "Well, here I am inventing physics and it sure seems to me that force consistently and exactly equals mass multiplied by acceleration.")
It can also be because the seeming is generally correct, but there exists variance. If 95% of the marbles in a bag are red, and 5% of the marbles in the bag are green, it's correct to guess that the next randomly selected marble will be red, because betting on red makes you the least likely to be wrong. But that doesn't mean that it can't be green, or that you should be shocked if it's green. If you bet on red the whole way through the bag, you know you'll be wrong five percent of the time.
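If you like seeing the arithmetic run, here's a minimal simulation of the bag; the numbers are just the ones above:

```python
import random

random.seed(0)
bag = ["red"] * 95 + ["green"] * 5
random.shuffle(bag)

# The error-minimizing policy: bet "red" on every single draw.
wrong = sum(1 for marble in bag if marble != "red")
print(f"Betting red every time: wrong on {wrong} of {len(bag)} draws")
# -> wrong on exactly 5 of 100 draws, no matter how the bag is shuffled.
```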
P2: Every observation is evidence in favor of more than one hypothesis.
If my friend tells me my hair looks good, this is an update in favor of my hair looking good and an update in favor of my hair not looking good but my friend lying to me for various reasons (kindness, prank). In order to know which update is larger, you need some other information about what kind of person my friend is, and what our relationship is like.
If I see some breaking news about some person having maybe done something horrible, this is an update in favor of that person having done something horrible, and an update in favor of that person being the target of a smear campaign/conspiracy, and an update in favor of the presented claim being technically correct but there being a ton of other relevant context and facts that will substantially change the gestalt of the story. In order to know which update is larger, you need some other information, etc.
(Many of my readers will be thinking "this is just Bayes," and yep, it's just Bayes.)
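For the hair example, here's a minimal sketch of the update; every prior and likelihood is a made-up number for illustration, and the two hypotheses are treated as the only possibilities for simplicity:

```python
# Two live hypotheses for the same observation ("your hair looks good"):
#   hair_good:   my hair actually looks good
#   friend_kind: it doesn't, but my friend is being kind
# All numbers below are invented for illustration.
priors = {"hair_good": 0.5, "friend_kind": 0.5}
likelihoods = {"hair_good": 0.9, "friend_kind": 0.6}  # P(compliment | hypothesis)

# Bayes: P(H | compliment) is proportional to P(compliment | H) * P(H).
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: round(u / total, 2) for h, u in unnormalized.items()}
print(posteriors)  # {'hair_good': 0.6, 'friend_kind': 0.4}
```

Notice that the compliment is evidence for both hypotheses; which update is larger depends entirely on the likelihoods, which is exactly the "other information about my friend" you need.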
The upshot of these two premises is that it's wise to have at least two active hypotheses running at all times, for any question of import.
By default, it seems to me that most humans have only one, i.e. not a hypothesis at all but a singular belief.
This is true (in my experience) even of the type of person who is acutely aware of their own fallibility, and who acknowledges that they are occasionally (or often) mistaken. Even people who are savvy enough to try to put a number (like "85% confident") on it, such that they can track their calibration over time, tend to leave the remaining swath of possibility un- or under-specified.
They will admit that they might be wrong in some vague fashion about [free speech, COVID policy, the president, existential risk from artificial intelligence, Kyle Rittenhouse, universal basic income, Leverage Research, the importance of religion, what the person who wrote that comment was obviously really saying, never mind their actual words], etc., but that doesn't stop them from having only a single real guess in many situations where that seems (to me) to be wildly premature, and an open invitation to all sorts of known and problematic bias.
There's a huge difference between [having an answer which you are virtuously prepared to abandon, if forced] and [keeping two or more distinct possibilities firmly in focus, even as you track that one of them is substantially more likely than the others].
There's a difference in how that feels, and there's a difference in how it influences one's reactions to new and relevant information. Choosing a single possible world-state and then looking to see if it's compatible with the available evidence is very different from looking at multiple possible world-states and then asking yourself what sorts of evidence would rule each one out.
To be clear: it's true that one (often, regrettably) has no choice but to do the equivalent of [placing an unambiguous bet] when it comes to taking actions. Sometimes, you simply have to behave as though the most likely outcome is what's going to happen—to choose [actions that will pay out if the marble is red, and cost you if the marble is green] over [actions that will pay out if the marble is green, and cost you if the marble is red]. There are many situations where we cannot afford to sit back and do nothing while we wait for more information to come in, and in many of those situations we have to pick a single exclusive strategy and run with it. You can't always hedge.
But the fact that one must (often, unfortunately) take singular action doesn't mean that one can't hold nuanced beliefs. You can put your money on red without losing track of the true fact that the next marble out of the bag could easily be green.
Split and Commit
Here, then, is the recommendation:
When you encounter [evidence] that sure looks to you like it implies [X], then rather than simply switching into "evaluate [X]" mode, you split and commit.
By "split," I mean that you explicitly ask yourself both:
"What kind of world contains both [evidence] and [X]?"
and also:
"What kind of world contains both [evidence] and [not-X]?"
Don't just focus on the world where things are what they seem to be, at first blush. Feel free to notice that one possibility seems pretty darn likely, but hold yourself to the standard of seriously and concretely considering at least one other possibility.
"If it turns out that this isn't what it looks like, what's the next most likely story that's still consistent with [evidence]?"
And by "commit," I mean that you choose a preliminary reasonable-to-you response in each of those possible worlds.
"If this is indeed what it looks like, then I should probably do something like [A]. If it's not, though, then [A] would be bad/counterproductive, and I should instead respond with something like [B]."
This commitment doesn't have to be public. It doesn't have to be set in stone. It can be conditional, depending on the various possible states of your next piece of evidence.
The key thing is simply that it exist at all. That you set aside the additional thirty seconds it takes to specifically and concretely dignify the possibility that things are not what they currently appear to be, and make a rough draft of what right action looks like, in that world. That you do this habitually, so that in those times when they aren't what they seemed at first glance, you're primed to notice, and ready to respond.
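For the programmers in the audience, here is one way the shape of the habit could be laid out mechanically; the structure and all of the example content are illustrative, not canonical:

```python
from dataclasses import dataclass

@dataclass
class Possibility:
    world: str     # "what kind of world contains [evidence] and this hypothesis?"
    response: str  # the preliminary response I commit to, if this is that world

def split_and_commit(evidence: str, possibilities: list[Possibility]) -> None:
    # The split: hold at least two distinct hypotheses in focus.
    assert len(possibilities) >= 2, "one hypothesis is a belief, not a split"
    print(f"Evidence: {evidence}")
    # The commit: each hypothesis already has a drafted response attached.
    for p in possibilities:
        print(f"  If {p.world}, then: {p.response}")

split_and_commit(
    "breaking news that so-and-so did something horrible",
    [
        Possibility("they really did it", "stop endorsing them; wait for details"),
        Possibility("it's a smear or missing context", "hold off; look for the fuller story"),
    ],
)
```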
Or, if I might embed my tongue firmly in my cheek: if running this algorithm seems to you like a dumb or not-worth-it idea, then fair enough, but...what ought you do in the world where it just looks dumb, and actually isn't?
Appendix
The following are some responses from people who encountered the split-and-commit tool in earlier essays and had concrete things to say about it.
Rob Bensinger:
A benefit of split-and-commit I'm surprised wasn't high on your list: people often want to hedge their bets and pick stances and policies that internally feel justified/OK regardless of what the outcome is—they like being able to strategically switch between 'that looks bad' and 'that is bad'. Split-and-commit makes it easier to catch yourself doing this and discourage anyone from doing it.
Irena Kotíková:
One of my favourite concepts of all time. Especially because I often notice a ton of resistance to keeping the commitment once the split happens.
Marcello Herreshoff:
So the way I see it, it feels more like split and commit has a minimum of three plans. You need the third plan to tell you what experiment to do to figure out which of the two worlds under consideration you live in. Otherwise tomorrow could easily look like "yep; it still seems like things are as they seem, time to execute plan one!"
Logan Strohl:
Immediately after I finished reading this, I practiced the very first step in a training progression for “split and commit” on a walk from my house to the library.
Here’s the exercise I did:
While walking, my attention will happen to land on things. When it does, I’ll run through the following structure: It looks like x. It might instead be that y. If it’s what it looks like, I will p. If y, I will q.
Some examples I happen to remember:
- It looks like that’s a building. It might instead be an alligator. If it’s what it looks like, I’ll just walk by it. If it’s an alligator, I’ll go get my neighbor to test whether there is in fact a fucking alligator.
- Seeing that guy from the back, it looks like he’s smoking a cigarette. It might be that he’s not smoking a cigarette. If he is, I’ll walk by him and hold my breath. If he’s not, I’ll walk by him and breathe normally.
- That looks like a magnolia tree. It might be a tree I’m unfamiliar with. If it’s a magnolia tree, I’ll expect to keep seeing things I associate with magnolia trees if I keep inspecting it. If it’s a tree I’m unfamiliar with, I’ll expect to encounter things I don’t associate with magnolia trees if I keep inspecting it.
Notes on my experience of the walk:
- Often when I identify an alternative interpretation of some observation, the course of action I commit to is the same in either case, but I come away with a feeling of having learned something anyway.
- It looks to me like most reasonable responses are expectations of future experiences, rather than physical interventions or policy changes. It may be that this changes when the motion “split and commit” is taken only when I encounter an appropriate trigger, rather than arbitrarily. (I notice myself automatically deciding on courses of action given either state of affairs; yes good.)
- The one-second version of split and commit involves going back and forth two times. The first time, you feel the thing you're seeing from your default perspective, then flip over and feel its meaning from an alternative perspective. The second time, you occupy the first perspective while imagining a course of action to take in response, then flip over to the other perspective and imagine an appropriate course of action from there. I started doing the one-second version after about seven minutes of practice, which was about three minutes after it became available. I stuck to the slow version for the extra three minutes to make sure I knew what I was doing.
- There are a lot of alternative interpretations of the same observations, although there may only be a small number that explain the observation about equally well. There is often a moment where I must choose whether to keep generating interpretations. It seems to me from my experience practicing this so far that most of the benefit most of the time comes from identifying a single alternative interpretation, followed by plans of action for each interpretation (although in fact bothering to identify an alternative interpretation is all by itself most of the thing). I have a feeling that there are some kinds of situations where it is wiser to generate several interpretations before committing to courses of action; I will file this under “followup study”, and continue to focus on “split and commit”.
As usual, humans are difficult and complicated and I’m glad I began to practice mostly in their absence, even though this was presented [in its original context] as primarily a social skill. I think focusing on humans should probably be part three of the training progression.
(Part two of the training progression should be to identify and train the triggers for “split and commit”; obviously splitting and committing for absolutely anything my attention happens upon is only useful if I want a rapid-fire training session for the motion itself. The motion is best taken in response to certain kinds of experiences, and the next thing I need to know are which experiences indicate that it’s an especially good time to split and commit.)
34 comments
comment by TropicalFruit · 2021-11-24T07:45:55.786Z · LW(p) · GW(p)
Just reading the first section of this article exposed how much I emotionally enjoy having just my one hypothesis. It never occurred to me until now that, even though my held hypothesis is almost always justified, the fact that I no longer hold alternatives in focus is suboptimal.
That lack of focus on the other hypotheses feels good though... when my models of the world are accurate, it makes me very happy, so I don't like entertaining the possibility that it's some sort of long tail random event.
Anyway, might as well start unwinding the problem now, so here goes... (this is going to hurt).
Here are three alternative Bitcoin hypotheses I'm going to watch out for:
- Governments have the power to crush Bitcoin whenever they want, but many actors in government own it, so they're holding off. Once it becomes a real threat, those actors will sell, and then the bans will come.
- I'm wrong about the way assets and store of value work, and Bitcoin will never truly compete with gold, real estate, stock indexes, or bonds as a long term store of value.
- I'm wrong about the network effect of Bitcoin, the value of true decentralization, and the superiority of proof of work. Proof of stake coins will replace Bitcoin long term.
This shouldn't have been painful to write, but it was. As a mid-20s American having my wealth constantly plundered by the state, I find that Bitcoin gives real hope for the future.
Even though I truly believe these hypotheses are less likely than Bitcoin succeeding, the truth is, the reason I don't consider them is because I really, really don't want them to be true, not because their probability is so low that they aren't worth the mental effort.
comment by Gunnar_Zarncke · 2021-11-21T21:06:08.888Z · LW(p) · GW(p)
A way to practice this is to go on Twitter where a "preferred" interpretation X is often more salient. There is also often an immediately obvious alternative explanation not-X.
Let's do this:
Could the famous mysteries of quantum mechanics be explained if physics runs on a blockchain? Consider: Schrodinger's Cat could reflect a delay in reality "finalizing" until there's a sufficient number of experimental "confirmations"
-- https://twitter.com/ESYudkowsky/status/1462212853511852033
Interpretation X: If it could, my model of physics (or of blockchains) would be off. If so, I should look more deeply into it.
Interpretation not-X I: If it could not, then my model of EY's model of the world is off. Not very likely; I should look for further explanations.
Interpretation not-X II: EY is trying to communicate something else, maybe hinting at relevant though not exactly matching correspondences. This seems most likely, but it's a topic I'm not deeply interested in, so I would let it go. If I see more evidence of people engaging in the direction of X, I will pick it up again.
The two worst problems afflicting universities — rapidly increasing cost and rapidly decreasing freedom of expression — share a common cause: the growth in the number of administrators.
It turns out this is more like Logan's examples: Whether the common cause is administrators or not seems to have little impact on my actions. I think the right action here is to suspend judgment and await more info (e.g. on ACX).
Amazing alternate way to think about patents! @elonmusk
(was reshared by paulg, though it seems to be from 2014)
Context: I have layman experience in patents, just having reviewed a handful of patent applications.
Interpretation I: Elon Musk is open-sourcing patents in general, because patents are not a good idea. Low probability; otherwise I would have heard about open-sourcing SpaceX patents too. But it generally matches the recommendations about patents I've heard for startups: innovation speed is key. If so, I should also be less inclined to consider patents in general.
Interpretation II: Elon Musk is selectively open-sourcing Tesla patents. Assumption: this is beneficial to Tesla but not (yet?) to SpaceX, maybe because the industry is bigger. If so, I should consider the context when deciding whether to patent.
Interpretation III: The open-sourcing of Tesla patents is not really about patents but is some other kind of communication strategy. Maybe there were not that many key patents to begin with. With Musk this seems like a reasonable guess too. If so, I should put less weight on facts communicated by industry leaders in general.
↑ comment by ChristianKl · 2021-11-23T10:58:33.505Z · LW(p) · GW(p)
For most of its history, SpaceX wasn't filing patents. They only really started doing so in 2019 (https://insights.greyb.com/spacex-patents/). I expect a key motivation was to be able to use them defensively against Blue Origin.
↑ comment by Dan Weinand (dan-weinand) · 2021-11-23T00:15:25.311Z · LW(p) · GW(p)
Note that it might be very legally difficult to open-source much of SpaceX's technology, due to the US classifying rockets as advanced weapons technology (because they could be used as such).
comment by Ericf · 2021-11-22T02:28:06.244Z · LW(p) · GW(p)
Just to be clear, you should estimate the likelihood of your top 2 explanations (e.g. 70% and 10%) and then pick the behavior that leads to the best weighted outcome (e.g. cross the street to avoid the unleashed mangy dog even though it's probably friendly, since the cost of being wrong is disproportionately high, but don't call animal control, since it's probably someone's pet and they will be along shortly).
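(A minimal sketch of that weighting, using the 70% / 10% above; the utilities are invented for illustration:)

```python
# Weighted-outcome version of the dog example. The 0.7 / 0.1 are the
# probabilities above; the utilities are invented for illustration.
p = {"friendly_pet": 0.7, "dangerous": 0.1}  # top two explanations only

utility = {
    "walk_past":    {"friendly_pet": 0,  "dangerous": -100},
    "cross_street": {"friendly_pet": -1, "dangerous": -1},  # small fixed cost
}

for action, u in utility.items():
    ev = sum(p[world] * u[world] for world in p)
    print(action, ev)
# walk_past -10.0, cross_street -0.8: cross the street even though
# "friendly" is far more likely, because the downside dominates.
```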
comment by LVSN · 2021-11-21T11:11:39.457Z · LW(p) · GW(p)
Ten billion times Yes.
↑ comment by Zach Stein-Perlman · 2021-11-21T14:30:46.583Z · LW(p) · GW(p)
Or! This idea sounds superficially reasonable and even (per the appendix) gets praise from a few people, but is actually useless or harmful. Currently working out a hypothesis for how that could be the case...
comment by Measure · 2021-11-22T01:06:39.718Z · LW(p) · GW(p)
I suppose it's possible that the house elves are to blame. I'll make sure to consider this alongside my other theory.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-11-22T02:10:33.274Z · LW(p) · GW(p)
... I'm assuming this is a joke, but just in case it isn't:
If the only other theory one is capable of coming up with is "the house elves are to blame," one has a Serious Problem as an aspiring rationalist, and making a general habit of split and commit will help one to develop a muscle that will come in handy.
I am reminded (quite memorably) of a colleague I once worked with who had a self-image of being a competent rationalist, but was frequently literally incapable of coming up with non-straw reasons why someone would have [a model different from his own] about any number of situations. Like, he actually tried, and was several times unable to get past the "house elves are stealing our magic" phase.
"I can't think of any reason why this would be upsetting to someone. My best guess is that you're so depressed that you're delusional."
"I can't think of any reason you would take this position except maybe that your romantic partner has literally brainwashed you."
"The only justification I can think of for taking this position is that you're secretly trying to make a society where murder is okay."
... these are not exact quotes, but they're really actually quite distressingly close to exact quotes. Typical mind fallacy is a hell of a drug.
comment by Raemon · 2021-11-24T04:04:29.859Z · LW(p) · GW(p)
Curated. I've gotten value from the Split and Commit concept over the years and am glad to see a more succinct writeup. I think "have multiple hypotheses" and "have at least a rough sense of what you might do in worlds where either hypothesis is true" seems like a useful heuristic to avoid some common human rationality foibles.
I felt the opening examples were a bit distractingly political, and I think there are probably some ways to improve them, but that felt relatively minor.
↑ comment by Zach Stein-Perlman · 2021-11-24T06:00:16.369Z · LW(p) · GW(p)
I'm curious what examples you or others who found the opening examples distracting would prefer. Something like those examples is standard for describing moral progress, at least in my experience, so I'm curious if you would frame moral progress differently or just use other examples.
comment by cousin_it · 2021-11-22T16:18:34.430Z · LW(p) · GW(p)
A few years ago Abram and I were discussing something like this, and converged on "T. C. Chamberlin's essay about the method of multiple working hypotheses is the key to rationality". Or in other words: never have just one hypothesis; always have a next best.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2021-11-22T17:49:22.531Z · LW(p) · GW(p)
Link?
↑ comment by MondSemmel · 2021-11-23T16:50:59.682Z · LW(p) · GW(p)
From the Google search: Here's the pdf of the "revised and somewhat shortened" version from 1897. The Google search results also link to several discussions of the essay.
T. C. Chamberlin's classic essay on "The Method of Multiple Working Hypotheses" was originally published in Science in 1890 and has been cited by virtually all who have struggled to define the scientific method. The version of the essay reprinted here was revised and somewhat shortened by Chamberlin and first published in The Journal of Geology in 1897.
comment by matto · 2021-11-22T01:43:39.684Z · LW(p) · GW(p)
This reminds me of Edward de Bono's concept of PO:
"PO is a device to bring about an arrangement or rearrangement of information, not a device to judge the new arrangements or condemn the old ones."
"PO implies, 'That may be the best way of looking at things or putting the information together. That may even turn out to be the only way. But let us look around for other ways.'"
(Po is something you say to yourself or to another to signal "hey, let's stop and see if we can rearrange this information in a different way".)
And the reason why de Bono considered this an important instrument:
"It is interesting that in our thinking we have developed methods for dealing with things that are wrong but no methods for dealing with things that are right. When something is wrong we explore further. When something is right our thinking comes to a halt. That is why we need lateral thinking to break through this adequacy block and restructure patterns even when there is no need to do so."
I like the Split and Commit angle better though because it seems more practical in that it's a mini-framework or technique -- it's clear how to apply it to rearrange the information coming into the mind.
comment by bionicles · 2021-11-24T13:14:19.224Z · LW(p) · GW(p)
I often find that thinking about the counterfactuals gives ideas for the factuals, too. This gave me a new insight into the value of fiction: negative training examples for our fake-news detector; but the best fiction is not only fictional but also carries some deeper nugget of truth...
comment by madasario · 2021-11-24T11:32:13.423Z · LW(p) · GW(p)
In my head this breaks down like:
- Split: generate alternative hypotheses. "I believe X. What not-X might I come to believe, and why?"
- Commit: flesh out those hypotheses with contingent commitments for action. This is how you know you actually generated a real hypothesis rather than just "maybe the house elves did it, wow, this rationalist stuff is easy."
- Practice. It might be a great rationality workout to take very low-probability hypotheses seriously and build an action plan. "If I come to believe house elves did it, I will submit myself for psychological evaluation. If I pass that evaluation, I will dedicate my life to documenting their existence."
comment by Ruby · 2023-01-07T00:13:39.162Z · LW(p) · GW(p)
I was aware of this post and I think read it in 2021, but kind of bounced off it for the dumb reason that "split and commit" sounds approximately synonymous with "disagree and commit", though Duncan is using it in a very different way.
In fact, the concept means something pretty damn useful, is my guess, and I can begin to see cases where I wish I were practicing this more. I intend to start. I might need to invent a synonym to make it feel less like an overloaded term. Or disagree and commit on matters of naming things :P
comment by Gunnar_Zarncke · 2022-03-09T23:25:18.523Z · LW(p) · GW(p)
One consequence of Split and Commit is that you become less legible to other people. It requires more mental effort to keep track of all the permutations of other people's stances. This makes cooperation harder.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-03-10T05:21:59.307Z · LW(p) · GW(p)
I have not received feedback that I seem less legible since adopting this norm, tho a) I might just not be told and b) you may be more pointing at a cost of widespread adoption.
↑ comment by Gunnar_Zarncke · 2022-03-10T08:31:42.821Z · LW(p) · GW(p)
Yes. Some people can surround themselves with others where this works. But it might not scale.
comment by Alex Vermillion (tomcatfish) · 2024-06-07T16:31:08.093Z · LW(p) · GW(p)
Popping back in on this one after a while.
I really enjoy this. This can cut through a large number of annoying problems with one trick: If you split and commit and the actions for all the likely paths are essentially the same, you can ignore which thing is true for now.
"What if I'm a Boltzmann Brain?" If I am not, then I should proceed as normal. If I am, then it does not matter. --> Proceed as normal.
"What if my shirt is blue?" if it is not, then I'll wear it. If it is, then I'll wear it. --> Wear the shirt
It might sound silly, but I think Split and Commit is an easy-sell technique which can free you from a lot of unnecessary investigations automatically. Sure, it's also a good method for operating under uncertainty, but you get simplification for free.
Last note: I find this a good technique to explicitly invoke when someone close to me is anxious. We can build plans for the anxiety being true or not, but if it turns out to change nothing, we can ignore it.
comment by Fleece Minutia · 2023-03-25T20:22:23.592Z · LW(p) · GW(p)
Thank you for the nice post!
In the spirit of https://xkcd.com/208/, I'd like to share a completely imaginary story of how this might come in handy:
I open the door and yell "Hoooney, I'm home earl--heeey! What is going on here?"
"Uh oh, nothing, sweetheart," says the flustered, sweaty, naked honey. The door at the other end of the house slams loudly.
"Hmm," my mind says, "it sure does look like someone is having an affair. The third party left through the kitchen, leaving my partner there in a bit of a bind."
I struggle to contain the rising jealousy in my chest. Before it gets the better of me, I remind myself of the old rationality trick: Try to maintain at least one extra hypothesis.
"Oh, hold on, we've trained for this!" says my mind. "Split and commit! Make an alternative hypothesis! Look for disconfirming evidence! So what kind of world contains a flustered, sweaty, naked honey, a slamming door at the other end of the house, and NOT honey having an affair?"
There's no time for System 2 work, so I smile at honey and catch the first alternative hypothesis I see: I just walked in on my partner experimenting with hot naked yoga in secret, and the back door closed in the draft I caused when opening the front door.
My feeling of jealousy stops rising, unsure about what to do with itself now that my brain has two mutually incompatible hypotheses.
Fearing the worst and hoping for the best, I ask playfully: "Are you doing hot yoga without me?" and add more slowly: "...or is it time for... a talk?"
comment by Pattern · 2021-11-22T06:05:04.906Z · LW(p) · GW(p)
By default, it seems to me that most humans have only one, i.e. not a hypothesis at all but a singular belief.
Sort of. Let's say someone reads a newspaper. Let's call it The Daily Star. Yesterday, they thought it was a trustworthy publication, and believed the story it ran on its front page to be accurate. Today they find out one of its journalists violated journalistic integrity (something to do with conflicts of interest and secret identities), and they question its trustworthiness, and the veracity of yesterday's story.
They will admit that they might be wrong in some vague fashion about [free speech,
Value judgements.
comment by Chriswaterguy · 2022-08-17T18:17:09.806Z · LW(p) · GW(p)
A kind of contingency planning, but turning it into a habitual mental movement. That seems very valuable.
comment by countingtoten · 2021-11-24T05:45:30.305Z · LW(p) · GW(p)
https://threadreaderapp.com/thread/1368966115804598275.html
When Scott pointed out (for the first time, AFAICT) that Vassar is a dangerous person in the orbit of this community, it was the second report about such a person in as many weeks. This doesn't look to me like a pair of isolated incidents. It looks like part of a pattern.
Therefore, while your technique sounds like a good one in isolation, I suspect you're encouraging your overwhelmingly neurodivergent audience to double down on their inherent flaws, making these worse. (Another clue here is the fact that actions which could be 'justified' under very different models could be dishonest, or they could be plain old expected-value maximization.) The suggestion to think of an experiment to distinguish between hypotheses - indeed, to do anything at all - is a good addition.
↑ comment by TropicalFruit · 2021-11-24T08:01:15.334Z · LW(p) · GW(p)
I tried to read that link. I really did. I "read" like 10 paragraphs, and skimmed further down than that... but I gave up.
I'm interested in what you have to say. Mind providing a summary in... well-punctuated, concise English?
↑ comment by countingtoten · 2021-11-24T22:14:54.057Z · LW(p) · GW(p)
I just summarized part of it, and was downvoted for doing so. Have you tried to correct that and encourage me numerically?