Zetetic explanation

post by Benquo · 2018-08-27T00:12:14.076Z · LW · GW · 138 comments

This is a link post for http://benjaminrosshoffman.com/zetetic-explanation/

Contents

  What is yeast? A worked example
    Seeds, energy storage, and coevolution
    Energy extraction
    Cultured food
    Yeast-specific products
  Additional thoughts on explanation

There is a kind of explanation that I think ought to be a cornerstone of good pedagogy, and I don't have a good word for it. My first impulse is to call it a historical explanation, after the original, investigative sense of the term "history." But in the interests of avoiding nomenclature collision, I'm inclined to call it "zetetic explanation," after the Greek word for seeking, an explanation that embeds in itself an inquiry into the thing.

Often in "explaining" a thing, we simply tell people what words they ought to say about it, or how they ought to interface with it right now, or give them technical language for it without any connection to the ordinary means by which they navigate their lives. We can call these sorts of explanations nominal, functional, and formal.

In my high school chemistry courses, for instance, there was lots of "add X to Y and get Z" plus some formulas, and I learned how to manipulate the symbols in the formulas, but this bore no relation whatsoever to the sorts of skills used in time-travel or Robinson Crusoe stories. Overall I got the sense that chemicals were a sort of magical thing produced by a mysterious Scientific-Industrial priesthood in special temples called laboratories or factories, not things one might find outdoors.

It's only in the last year that I properly learned how one might get something as simple as copper or iron, reading David W. Anthony's The Horse, the Wheel, and Language and Vaclav Smil's Still the Iron Age, both of which contain clear and concrete summaries of the process. Richard Feynman's explanation of triboluminescence is a short example of a zetetic explanation in chemistry, and Paul Lockhart's A Mathematician's Lament bears strong similarities in the field of pure mathematics.

I'm going to work through a different example here, and then discuss this class of explanation more generally.

What is yeast? A worked example

Recently my mother noted that when, in science class, her teacher had explained how bread was made, it had been a revelation to her. I pointed out that while this explanation removed bread from the category of a pure product, to be purchased and consumed, it still placed it in the category of an industrial product requiring specialized, standardized inputs such as yeast. My mother observed that she didn't really know what yeast was, and I found myself explaining.

Seeds, energy storage, and coevolution

Many plants store energy in chemicals such as proteins and carbohydrates around their seeds, to help them start growing once they're in wet ground. Some animals seek out the seeds with the most extra energy, and poop the occasional seed elsewhere. Sometimes this helps the plant reproduce more than it otherwise would have; in such cases, the plant may coevolve with the animals that eat it, often investing much larger amounts of energy in or around the seed, since the most calorific seeds get eaten most eagerly.

Humans coevolved with a sort of grass. If you've seen wild grass, you may have observed stalks with seed pods on them that look sort of like tiny heads of wheat. Grain is basically a grass that coevolved with us to produce plump, overnourished seeds.

Energy extraction

Of course, there's only so much we can do to select for digestibility. Often even plants that store a lot of surplus energy need further treatment before they're easy to digest. Some species evolved to specialize in digesting a certain sort of plant matter efficiently; for instance, ruminants such as cattle and sheep have multiple stomachs to unlock the free energy in plant matter. Humans, with unspecialized omnivorous guts, learned other ways to extract energy from plants.

One such way is cooking. If you heat up the starches inside a kernel of wheat, they'll often transform into something easier to digest. But bread made this way can still be difficult to digest, as many eaters of matzah or hardtack have learned. Soaking or sprouting seeds also helps. And a third way to make grains more digestible is fermentation.

Cultured food

Where there's dense storage of energy, there's often leakage. Sometimes a seed gets split open for some reason, and there's a bit of digestible carbohydrate exposed on the surface. Where there's free energy like this, microbes evolve to eat it.

Some of these microbes, especially fungal ones, produce byproducts that are toxic to us. But others, such as some bacteria and yeasts, break down hard-to-digest parts of wheat into substances that are easier for us to digest. Presumably at some point, people noticed that if they wet some flour and left it out for a day or two before cooking it, the resulting porridge or cracker was both tastier and more digestible. (Other fermented products such as sauerkraut may have been discovered in a similar way.)

Of course, while grain-eating microbes will often tend to be found on grain, allowing for such accidental discoveries, there is no guarantee that they'll be the kind we like. And since they mostly eat accidental discharges of energy, there just aren't very many of them, compared to the amount of energy available to them once the flour is ground up and mixed with water. It takes a while for them to eat and reproduce enough to process the whole batch.

Eventually, people realized that if they took part of a good batch of dough or porridge and didn't cook it, but instead added it to the next batch, this would yield an edible product both more reliably (because the microbes in the starter would have a head start relative to any potentially harmful microbes) and more quickly (again, because they'd be starting with more microbes relative to the amount of grain they needed to process). This is what we call a sourdough "culture" or "starter".

(You can make a sourdough starter at home by mixing some flour, preferably wholemeal, with water, covering it, and adding some more flour and water each day until it gets bubbly. Supposedly, a regularly fed starter can stay active for generations.)
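To make the head-start arithmetic concrete, here is a minimal sketch in Python. The cell counts and doubling time are invented for illustration rather than measured values; the point is only that fermentation time scales with the number of doublings the microbes still need.

```python
import math

def hours_to_ferment(initial_cells, target_cells=1e9, doubling_hours=1.5):
    """Hours of exponential growth until the culture is large enough
    to process the whole batch of dough."""
    doublings = math.log2(target_cells / initial_cells)
    return doublings * doubling_hours

# Wild microbes that happen to arrive on wetted flour: a tiny inoculum.
print(f"from scratch: {hours_to_ferment(initial_cells=1e3):.0f} hours")  # ~30 hours
# A spoonful of ripe starter: a massive head start.
print(f"with starter: {hours_to_ferment(initial_cells=1e7):.0f} hours")  # ~10 hours
```

The reliability benefit falls out of the same arithmetic: the fewer doublings the desirable microbes need, the less time competing, possibly harmful, microbes have to establish themselves.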

Breads are particularly convenient foods for a few reasons. First, grains have a very high maximum caloric yield per acre, allowing for high population density. Second, dry grains or flour can be stored for a long time without going bad; as a result, stockpiles can tide people over in lean seasons or years, and be traded over large distances. Third, a loaf of bread is itself somewhat portable and durable, relative to a porridge.

Yeast-specific products

One of the microbes found in a sourdough culture, yeast, has a particularly simple metabolism with two main byproducts. It pisses alcohol, and farts carbon dioxide. Carbon dioxide is a gas that can leaven or puff up dough, which makes it nicer to eat. Alcohol is a psychoactive drug, and some people like how it makes them feel. Many food cultures ended up paying special attention to grain products that used one or the other of these traits: beer and leavened bread.

In the 19th century CE, people figured out how to isolate the yeast from the rest of the sourdough culture, which allowed for industrial, standardized production of beer and bread. If you know exactly how much yeast you're adding to the dough, you can standardize dough rising times and temperatures, allowing for mass production on a schedule, reducing potentially costly surprises.

The price of this innovation is twofold. First, when using standardized yeast to bake bread, we forgo the digestive and taste benefits of the other microbes one would find in a sourdough starter. Second, we become alienated from a crucial part of the production of bread, to the point where many people only relate to it as a recipe composed of products you can buy at a store, rather than something made of components you might find out in the wild or grow self-sufficiently.

Additional thoughts on explanation

I'm having some difficulty articulating exactly what seems distinct about this sort of explanation, but here's a preliminary attempt.

Zetetic explanations will tend to be interdisciplinary, as they will often cover a mixture of social and natural factors leading up to the isolation of the thing being explained. This naturally makes it harder to be an expert in everything one is talking about, and requires some minimal amount of courage on the part of the explainer, who may have to risk being wrong. But they're not merely interdisciplinary. You could separately talk about the use of yeast as a literary motif, the chemistry of the yeast cell, and the industrial use in bread, and still come nowhere close to giving people any real sense of why yeast came into the world or how we found it.

Zetetic explanations are empowering. First, the integration of concrete and model-based thinking is checkable on multiple levels - you can look up confirming or disconfirming facts, and you can also validate it against your personal experience or sense of plausibility, and validate the coherence and simplicity of the models used. Second, they affirm the basic competence of humans to explore our world. By centering the process of discovery rather than a finished product, such explanations invite the audience to participate in this process, and perhaps to surprise us with new discoveries.

Of course, it can be hard to know where to stop in such explanations, and it can also be hard to know where to start. This post could easily have been twice as long. Ideally, an explainer would attend to the reactions of their audience, and try to touch base with points of shared understanding. Such explanations also require patience on both sides. Another difficulty this approach raises is that plain-language explanations rooted in everyday concepts may not match the way things are referred to in technical or scientific literature, although this problem should not be hard to solve.

In some cases, one might want to forwards-chain from an interesting puzzle or other thing to play with, rather than backwards-chaining from a product. Lockhart seems to favor exploration over explanation for mathematics, and of course there's no particular reason why one can't use both. In particular, the explanation paradigm seems useful for deciding which explorations to propose.

Related: The Steampunk Aesthetic [LW · GW], Truly Part Of You [LW · GW]

138 comments

Comments sorted by top scores.

comment by Raemon · 2018-08-30T20:10:59.821Z · LW(p) · GW(p)

Two posts that feel relevant to this, linking briefly for now:

Outside the Laboratory [LW · GW]

The Steampunk Aesthetic [LW · GW]

Replies from: Benquo
comment by Benquo · 2018-08-31T04:18:16.803Z · LW(p) · GW(p)

Thanks for pointing these out. I feel like the class of explanation I'm trying to point to is the narrative complement to some of what you were trying to point to in The Steampunk Aesthetic. I'll add a link to it.

comment by habryka (habryka4) · 2018-09-03T23:17:07.658Z · LW(p) · GW(p)

Promoted to curated: I think the question of "what makes a good explanation, and how do humans come to really understand things?" is one of the core questions of rationality. I think this post is a well-written and clear attempt at introducing some important considerations on what makes a good explanation, and I expect most readers to walk away with a slightly improved ability to give good explanations.

Importantly, in the broader idea-pipeline of LessWrong, I think the concept outlined in this post is still in a relatively early poetry phase, and I would be somewhat hesitant for it to be adopted widely. As we develop and analyze the ideas in the post further, I expect we will eventually get something more similar to Eliezer's "A technical explanation of a technical explanation", where we can be more precise and robust in specifying what makes a good explanation, instead of having to rely on vaguer metaphors and individual examples.

(I don't mean to say that this post says the same thing as Eliezer's technical explanation post. I think it primarily talks about different aspects, which are also important. I am only trying to say that Eliezer's technical explanation seems like a good target standard for rigor and robustness.)

Replies from: Benquo
comment by Benquo · 2018-09-03T23:40:50.467Z · LW(p) · GW(p)

I agree on the limits of this post - I hope it's a beginning, not an end.

comment by Martin Sustrik (sustrik) · 2018-08-27T15:34:13.273Z · LW(p) · GW(p)

Programmers are often advised to write comments in the code about the intent, what they wanted the code to do, rather than about what the code does.

When you think about it, it makes sense. The code already does what it does, no need to write about that. However, what the code is supposed to do is often unclear, especially when the code is buggy.

This is kind of similar to the yeast example above. The rule is to explain why, not how.
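A minimal sketch of the difference (the example is invented, not from any particular codebase):

```python
import re

text = "tabs\tand  double  spaces"

# A comment about *what* the code does is redundant; the code already says it:
text = re.sub(r"\s+", " ", text)  # replace runs of whitespace with one space

# A comment about *intent* earns its keep: even if the code is buggy or gets
# rewritten, the reader still knows what it was supposed to achieve:
text = re.sub(r"\s+", " ", text)  # normalize spacing so that equivalent
                                  # documents hash to the same value
```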

To give another example, I am trying to learn statistical mechanics. Not to memorize it, but to actually grok it. And it turns out that staring at the equations doesn't help much. I am planning to look into its history to understand what kinds of problems the fathers of thermodynamics were trying to solve (something to do with steam engines, I guess), to understand why that specific kind of thinking about the topic is useful.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-27T19:42:28.584Z · LW(p) · GW(p)

[P]rograms must be written for people to read, and only incidentally for machines to execute.

— Harold Abelson and Gerald Jay Sussman, Structure and Interpretation of Computer Programs


This quote is correct for many reasons, one of which is that all a computer has to do with a program is execute it; whereas it often falls to humans to modify it, because to us, humans, there exists the concept of “what this program should, ideally, do”. The reason (or, if you like, a reason—though the major one, I would say) why code ought to be clear and readable is in order that humans may be able to (a) evaluate it on the basis of how far the actual program is from what we’d like it to be, and (b) modify the program in order to bring it more into line with the ideal.

This, in turn, gives us a way to respond to the occasional claim that it is not, in fact, necessary that code be human-readable. Clearly, code should be human-readable if there will ever be a case when either (a) humans need to examine it by hand (as opposed to examining it with some automated tools), or (b) humans need to modify it. If this is simply not going to happen (e.g., Java bytecode), then readability is irrelevant.


And now we can apply a similar principle to the ideas in the OP. It is, indeed, important to understand the “how” and the “why” of certain things with which we may, personally, as amateurs (as opposed to domain experts working in the context of commercial/industrial processes), productively interact.

But not otherwise. Someone with whom I was discussing this brought up the following example: suppose you watch a fascinating and enlightening video that tells you all about how jelly beans are manufactured. Having watched it, you can now… produce jelly beans in batches of hundreds of thousands at a time, with the aid of a fully-staffed manufacturing facility, a global supply chain, and industrial-scale equipment that costs millions of dollars?

This is useless knowledge. It is entertainment, nothing more. At worst, it may be insight porn, deceiving us into the sense that we’ve gained some practical understanding of the world, while in fact teaching us nothing that we can actually use.

comment by mako yass (MakoYass) · 2018-09-11T07:59:28.195Z · LW(p) · GW(p)

Stories were probably the first information format

Imagine a time before language. The information you get from your environment comes as a series of events happening over time. That's the kind of information you're good at integrating into your active knowledge. Now, our blind idiot creator bestows language on us; what kind of information structure is going to allow us to convey information to our conspecifics in a way that they'll be able to digest and internalize? Just the same: a description of a series of events, spoken over time, which they may now experience as if those events were happening again in front of them.

And this kind of information is very easy for us to produce. We don't need to be able to assemble any complex argument structures; we just need to dump words relating to the nouns and verbs we saw, in the order that they occurred. Stir in an instinct to dump episodic memories in front of people who weren't present in those memories, and there: they will listen, and they'll get a lot out of it, and now we have the first spoken sentences.

In light of this, if it turns out storytelling was not the first kind of extended speech, I will be shocked.

The story of bread is not the most succinct way to encode the information about bread that a person most needs; an idea is only useful if it helps a person anticipate the futures of the things that matter to them, in consequence of their available actions. Our past is not our future; we can't affect the past, and a chunk of the past will not always tell us much about the future. However, a story, a relaying of events from the past, is extremely digestible. There is no way to arrange information that an animal would find easier to make sense of.

If you can find a way to explain what happened, in chronological order, that led the ingredients of bread to become abundant, then made it easy for us to make bread, and then ensured that we would be able to digest it, you've explained why and how bread is important in the form of a story; it will be not just useful information, but very easy for us to integrate.

And that, it seems, is what a zetetic explanation does? This... explaining by selecting parts of the history that can be assembled into a complete proof of the thing's importance... I think it does deserve a name.

comment by Trevor Hill-Hand (Jadael) · 2018-08-27T02:35:29.663Z · LW(p) · GW(p)

This post helped me notice a difference I've felt between satisfying and unsatisfying explanations; why Feynman explaining something feels different from Wikipedia explaining something. I love it.

comment by Douglas_Knight · 2018-08-30T18:22:15.991Z · LW(p) · GW(p)

There were two details that you left out that bothered me. At first I felt like I was nitpicking, but then the two coalesced and I felt better about describing them.

You say that animals have coevolved with plants, but I think you should have spelled this out more. You say that the plant puts more energy around the seed, but you don't say that this is a fruit. The point of a fruit is not to be higher energy than a seed, just so that it is more likely to be eaten (are there any examples of this, outside of agriculture?). The point of a fruit is to separate out the fruit, which is to be digested by the animal, from the seed, which the plant does not want to be digested. Fruits are wet sugar, the easiest thing to digest. An animal that eats a seed is competing for the same energy as the plant, whereas an animal that specializes in eating fruit may not be very efficient at digesting the inner seed. This isn't relevant to the coevolution of wheat, which benefits from humans planting seed corn, not from humans failing to digest the wheat.

You mention two methods of making bread without industrial yeast. One is to just leave out porridge, harvesting yeast from the air or the wheat. Another is to get starter from your neighbor. But I think that the most common method, at least historically, is to put fruit in the porridge. Since fruit is easier to digest than seeds, yeast is more common on fruit than on seeds.

Replies from: Benquo, vedrfolnir, ryan_b
comment by Benquo · 2018-08-31T04:16:28.226Z · LW(p) · GW(p)

The second point seems like an important omission if true. Not having known that originally, I notice that based on the model in this post, it seems like the sort of thing that could likely be true. I don't think I explicitly mentioned the neighbor method either, though I think it's another reasonable inference from what I did say.

On your first point, it seems like while fruits often store food packages outside the seeds, grains grow a bunch of similar modules with uncertainty about whether they'll be used as the reproductive payload or the calorie surplus that persuades the symbiote to spread the reproductive payload. My guess would be that before explicit agriculture, some grasses did well around humans because there would be the occasional undigested or otherwise scattered seed by accident.

Overall it seems like you're pointing at something important on the object level here, and I appreciate the engagement with the *kind* of explanation I was trying to give.

comment by vedrfolnir · 2018-09-12T04:26:09.629Z · LW(p) · GW(p)

I'm not a biologist, but I think it would be pretty difficult to tell whether fruits are intended to encourage animals to eat them or to protect the inner seed. But the energy in an avocado is primarily stored as fats, and it's generally thought that they were eaten by now-extinct Central American megafauna. (And it's common to stick avocado seeds with toothpicks to get them to sprout...)

There's also the chili pepper, but I don't know if anyone's studied digestion of pepper seeds in birds (which aren't sensitive to capsaicin) vs. mammals (which are). It may be that chili peppers evolved to deter mammalian but not avian consumption because the mammalian digestive tract is more likely to digest the seeds, rather than (as the common explanation has it) because birds disperse the seeds more widely.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2018-09-12T18:52:21.590Z · LW(p) · GW(p)

For chili peppers, I, too, prefer the second explanation. I think that is the more popular one, e.g., appearing in Wikipedia. More specific than digestion is the theory that it is to avoid the grinding teeth of mammals. I don't know if the specific case has been studied, but the general topic of how much various fruit-eaters digest seeds has been studied. Presumably there is study of how to select cooperative fruit-eaters over defective fruit-eaters.

I am confused by your first sentence. What are the alternative hypotheses? Protect the seed from what? Fruit are certainly lousy at protecting the seed from yeast. I claim that they protect the seed from specialized seed-eaters by encouraging consumption by specialized fruit-eaters. Yes, the avocado is a pretty weird fruit, but it's still a soft, wet, easily digestible outer coating around a hard, difficult to digest seed. What light does it shine on the question? Your use of the word "but" suggests that it addresses the first question, but I don't see it, perhaps because I don't know what the first question is.

comment by ryan_b · 2018-09-04T21:53:34.229Z · LW(p) · GW(p)

I find the adding fruit method interesting, as I had not heard of it before. I had understood the exposure-to-air method to be both the earliest and the most common, which matches my expectation as all the environments people are in have naturally occurring yeast, but not all of them have fruit to add. For example, traditional sourdough explicitly has just water, flour, and salt.

I'm pretty sure the methods of bread making, at least in Morocco and Iraq, don't involve adding fruit, which I sort of mentally extend across the Arabic-speaking world. Because of its similarity I assume the same of naan. Interestingly, Iraq (at least the Baghdad area, where I have been) is an easy-access fruit environment courtesy of citrus trees.

On the flip side, I have seen recipes for different sourdoughs that involve adding grapefruit juice to accelerate the process, but in the context I saw it was just for speeding things up. There is also the habit of adding various leftovers, including fruits, nuts, and vegetables to bread, which would probably have a similar effect.

comment by Leafcraft · 2018-09-04T09:56:34.749Z · LW(p) · GW(p)

Thanks for the great read. I wonder if someone would be interested in writing a zetetic description of a very complex subject, as an exercise of course, to see if such a thing is even possible for very complex subjects, or how effective it is. I'm new to the site, so sorry if such a request is off topic.

Replies from: Benquo
comment by Benquo · 2018-09-04T14:23:10.727Z · LW(p) · GW(p)

You could try.

comment by SilentCal · 2018-09-05T19:32:32.371Z · LW(p) · GW(p)

So the big question here is, why are zetetic explanations good? Why do we need or want them when civilization will happily supply us with finished bread, or industrial yeast, or rote instructions for how to make sourdough from scratch? The paragraph beginning "Zetetic explanations are empowering" starts to answer, but a little bit vaguely for my tastes. Here's my list of possible answers:

1) Subjective reasons. They're fun or aesthetically pleasing. This feels like a throwaway reason, and doesn't get listed explicitly in the OP unless 'empowering' unpacks to 'subjectively pleasing', but I wouldn't throw it away so fast--if enough people find them fun, that alone could justify a campaign to put more zetetic explanations in the world.

2) They let you test what you're told. This is one of the reasons given in OP. Unfortunately, not every subject is amenable to zetetic explanation, and as long as I have to make up my mind about lots of science without zetetic understanding, I don't see zetetic explanation being an important part of my fake science filter.

3) They let you discover new things, whereas following rote instructions will only let you do what's been done before. This is true, but I think it usually takes a large base of zetetic understanding to do new useful things. If I tried to create new fermented foods based solely on having read this post, I probably wouldn't achieve anything useful. But if I did want to create novel fermented foods, I'd want to load up on lots more zetetic knowledge.

4) General increased wisdom? Maybe a zetetic understanding of bread ripples through your knowledge, leading you to a slightly better understanding of biology, the process of innovation, nutrition, and a variety of related fields, and if you keep amassing zetetic understandings of things it'll add up and you'll be smarter about everything. It's a nice story, but I'm not convinced it's true.

Replies from: Raemon
comment by Raemon · 2018-09-05T19:59:00.243Z · LW(p) · GW(p)

I think your list is roughly correct. But, put another way that feels better oriented to me:

It might or might not be that zetetic explanations are good. But what are the problems that Benquo is trying to solve here and how can we tell if they got solved?

  • People often learn bits of knowledge as isolated facts that don't fit together into a cohesive world-model. This is a problem when:
    • people are confronted with problems that they have the knowledge to solve, but aren't aware that they do
    • people are confronted with situations they don't even realize are problems, or worth considering as problems, because they were so disconnected from how their world fits together that they didn't see it as gears.
    • a stronger claim may be that there exists a longterm, high-level payoff for having a highly developed ability to integrate knowledge (partly because you have a whole lot of accumulated knowledge that fits together usefully, but moreover because you have the ability to reflexively form theories, test them, and use them effectively, which is built out of several subskills; see the middle sections of Sunset at Noon [LW · GW] for my take on that).

So the hypothesis here is that:

  • Most people's pedagogy has room for improvement, in the domain of helping people to connect facts into an integrated world-model, and to build the skill of doing so.
  • Explanations that cross domains, include historical content, and connect a concept to anchors that a person can clearly see and understand are a good way to improve pedagogy along these lines.
  • I'd perhaps add that that style of pedagogy may be good for the teacher as well as the student.

comment by Ben Pace (Benito) · 2018-08-31T04:51:01.855Z · LW(p) · GW(p)

I'm reading the largely lucid explanation of yeast, but here are the main bits where I got stuck:

Where there's dense storage of energy, there's often leakage. Sometimes a seed gets split open for some reason, and there's a bit of digestible carbohydrate exposed on the surface. Where there's free energy like this, microbes evolve to eat it.
Some of these microbes, especially fungal ones, produce byproducts that are toxic to us. But others, such as some bacteria and yeasts, break down hard-to-digest parts of wheat into substances that are easier for us to digest.

The first paragraph I get; in my world-model, there's a generic evolutionary pressure that optimises for this kind of thing at different levels - here it's microbes coming to eat the carbohydrates. And then some of them produce toxic by-products, which makes sense, I imagine most chemicals aren't that great for humans to eat.

But then some of them... why do they break down the 'hard-to-digest parts'? And why does this follow from a section about microbes eating carbs, which I presume (not really knowing quite what carbs are except that humans are supposed to eat them) are not 'hard-to-digest' parts of wheat?

I suppose I have a model where there are hard to digest parts, and easy to digest parts, and you're saying that if the hard to digest parts get broken then microbes come for the easy to digest parts. But apparently some microbes come for the hard-to-digest parts too, and turn them into easy-to-digest-for-humans parts.

That makes sense, but then I'm confused about why you wrote that after saying that microbes come to eat the juicy insides *after* the outside tough parts are removed.

---

Then, the next bit reads:

Presumably at some point, people noticed that if they wet some flour and left it out for a day or two before cooking it, the resulting porridge or cracker was both tastier and more digestible. (Other fermented products such as sauerkraut may have been discovered in a similar way.)

Is there an implicit statement here that water broke down the hard-to-digest outside parts and brought the yeast molecules in? I'm not sure I know enough about fermentation, or what it even is, to follow that bit.

---

I don't really know what carbohydrates or fermentation are, and that maybe prevented me from reading clearly.

Replies from: Benquo
comment by Benquo · 2018-08-31T16:43:38.629Z · LW(p) · GW(p)
why do they break down the 'hard-to-digest parts'?

The only explanation I have to offer here is a selection effect. Mostly when something is food to us, other creatures compete with us for the food and we want to ward them off. Occasionally we find something that transforms nonfood to food, and encourage it to grow. Crops are one example. Ruminants are another. The microbes that grow on grain are another.

Is there an implicit statement here that water broke down the hard-to-digest outside parts and brought the yeast molecules in?

It's the breaking up of the wheat kernel in grinding flour that makes more of the energy available (vs the occasional leakage you might expect to happen without human intervention), by opening up the capsules it's in. But water is also needed for metabolism, so until you wet the flour the naturally occurring grain-eating microbes can't take much advantage of this.

comment by Sniffnoy · 2018-09-05T18:43:47.593Z · LW(p) · GW(p)

So basically, historical explanations. These are frequently a good idea for exactly the reason you say -- a lot of things are just a lot more confusing without their historical context; they developed as the answer to a series of questions and answers and things make more sense once you know that series.

However it's worth noting that there are times where you do want to skip over a bunch of the history, because the modern way of thinking about things is so much cleaner, and you can develop a different, better series of questions and answers than the one that actually happened historically.

Replies from: Benquo
comment by Benquo · 2018-09-05T19:11:57.740Z · LW(p) · GW(p)

Here's why I think the distinction you're drawing can be misleading:

Some "historical" explanations lay out a path to discovering a thing that clarifies the evidence we have about it and what other ways that evidence should constrain our expectations. Other "historical" explanations recite the successive chronology of opinions about the thing, often with a progress narrative.

Some modernized explanations go through a better-than-chronological series of questions and answers that lead you more efficiently to understanding the thing. Others teach you how to describe the thing in contemporary technical jargon.

For both the chronological and modernized approach, the first version is zetetic, the second version isn't.

Replies from: Sniffnoy
comment by Sniffnoy · 2018-09-06T03:15:06.205Z · LW(p) · GW(p)

Thanks, that's a good way of putting it.

comment by Said Achmiz (SaidAchmiz) · 2018-08-27T03:56:20.554Z · LW(p) · GW(p)

Often in “explaining” a thing, we simply tell people what words they ought to say about it, or how they ought to interface with it right now, or give them technical language for it without any connection to the ordinary means by which they navigate their lives. We can call these sorts of explanations nominal, functional, and formal.

In my high school chemistry courses, for instance, there was lots of “add X to Y and get Z” plus some formulas, and I learned how to manipulate the symbols in the formulas, but this bore no relation whatsoever to the sorts of skills used in time-travel or Robinson Crusoe stories.

Hmm…

the ordinary means by which they navigate their lives

 

time-travel or Robinson Crusoe stories

Hm.

Replies from: Bastian Sommerfeld, Benito, Benquo
comment by Bastian Sommerfeld · 2018-08-30T06:46:38.693Z · LW(p) · GW(p)

Try steelmanning Benquo's idea rather than picking apart his bad example of it; that the example was flawed, we have now established well.

What sort of explanation could his description encompass? It made me think of Eliezer's Free Will posts, which built knowledge from the ground up, starting at the roots and going through all the inferential steps up to his final solution to the question.

That style of explanation empowers me: I can read it, jump out when I realize that my reasoning hasn't reached the end result yet, think for myself, and try to get it myself. Furthermore, it gives me the tools and hooks to disassemble his whole reasoning. If I, in that fully represented journey of reason, find an error, I could hypothetically take it down.

This would be my best interpretation of what Benquo might have had in mind when he asked to explore this vague feeling/idea about a concept.

Said, you seem resourceful; can you help identify what 'zetetic' might mean? Apart from that: I'm sure someone has already identified the concept and named it. Maybe we can find that?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-30T16:41:56.638Z · LW(p) · GW(p)

What sort of explanation could his description encompass? It made me think of Eliezer's Free Will posts, which built knowledge from the ground up, starting at the roots and going through all the inferential steps up to his final solution to the question.

It is, perhaps, not a coincidence that I consider the Free Will sequence to be one of the few parts of the Sequences which is quite unconvincing, and in fact rather confused.

That style of explanation empowers me: I can read it, jump out when I realize that my reasoning hasn't reached the end result yet, think for myself, and try to get it myself. Furthermore, it gives me the tools and hooks to disassemble his whole reasoning. If I, in that fully represented journey of reason, find an error, I could hypothetically take it down.

Meaning no disrespect, but I am having considerable trouble parsing this paragraph. Perhaps you could rewrite it, or someone else might interpret it for me?

Replies from: Bastian Sommerfeld
comment by Bastian Sommerfeld · 2018-08-31T06:51:02.165Z · LW(p) · GW(p)

I do understand that his Free Will posts may come off as confused. I'd even go so far as to say they are! Purposefully so. Let me explain why by rephrasing, as per your request:

Imagine reasoning as a staircase, where each step is one I have to climb. I think about a problem and reach a conclusion, which seems to be satisfying. Then I realize: no, it isn't. I have to take another step towards full understanding of the problem; I have to reason further; there is more to find.

When someone gives me only the top of those stairs, I'll be incredulous as to how anyone might've gotten there: priests in temples casting magic spells to produce yeast.

However, imagine getting the whole staircase in the form of an explanation. You'll be able to start at the lowest step and work your way up, using the explanation you've been given as a handrail to aid you, while all the time examining each and every step for cracks - or junctions others missed.

In my opinion, accounting for the missteps and pitfalls that are easily fallen into in a chain of reasoning is as much a part of that staircase as all the right steps. Scientific philosophy, or something - the mistakes made are as much learning material as the right steps.

Including all of that in an explanation, of course, necessitates giving a confused explanation on Eliezer's part. The Free Will sequences are meant for aspiring rationalists honing their tools on a first task. Instead of simply giving the result of his reasoning - the 'top step' - and leaving them to their own devices, he's gone through the trouble of giving the iconic steps of the staircase leading up to his result.

comment by Ben Pace (Benito) · 2018-08-30T19:57:44.350Z · LW(p) · GW(p)

[Mod Note] I feel uneasy about the dynamics of this thread. Will think a bit more before writing out exactly what seems off, and I will add a more substantial comment either later today or tomorrow, but just wanted to register that I'm thinking about it.

Added: Pardon me. Will finish writing something substantive here tomorrow.

Replies from: Benito
comment by Ben Pace (Benito) · 2018-09-04T04:41:55.148Z · LW(p) · GW(p)

@Said. I’ve been thinking a bit about this comment thread, going back to read some comments of yours about moderation, and trying to pass your general ITT regarding commenting norms. Here’s my current best guess about what seems important to you in this domain:

  • Our global intellectual community suffers from low standards
    • Many parts of science are seeing a catastrophic replication crisis - even neuroscience [LW · GW].
    • Facebook and Twitter are shining examples of what being overwhelmed with low-quality content looks like.
  • Our specific intellectual community (LessWrong) suffers from low standards
    • The process that elevates posts and ideas is hardly reassuring. Lots of people upvote it, then maybe it gets curated, and then that's it. No formal and rigorous checking or feedback, no outside reviewers, nothing. There are a few comments, but nobody is being explicitly incentivised to find good counter-arguments.
  • The correct action here is to significantly increase our standards.
    • This will cause many people to not write most of the content they’re writing. Sure, this might be most of the content, but one man’s modus ponens is another’s modus tollens - the current content is just bad. There is an awful lot out there, and we need to refine it, not add to it.

The situation we are in is not one of slightly raising standards that are generally already pretty good, but running crisis-mitigation / triage on the horrendous state of the current internet and LessWrong. If someone writes a post that is not up to a good standard, this needs to be made apparent to them, for two reasons.

Firstly, because it damages the commons; they’re clogging up our collective intellectual space with wrong (often trivially wrong) points. If this is not made apparent in the comments, then it would be better if the post was not written at all. Immediately commenting to point out mistakes is the correct response, the person needs to learn that this is not to be tolerated. That way leads to madness, or worse, Tumblr.

Sure, they may try to reply to you, to argue their point further, you may even end up understanding them better, but it was still their fault to make the post wrong in the first place, not your fault for misunderstanding their writing or being highly critical of their basic errors.

And secondly, because criticising people's ideas is the only way for them to improve. LessWrong is a place we actually care about being good, where people can come and practice the art of rationality. Practice means getting feedback, and coddling people with low standards will mean they will not be able to find their actually good ideas. And this is, after all, what's most important - that we figure out true and important ideas.

---

I take the following quotes of yours as implying this interpretation.

One [LW(p) · GW(p)]:

I do not write top-level posts because my standards for ideas that are sufficiently important, valuable, novel, etc., to justify contributing to the flood of words that is the blogosphere, are fairly high. I would be most gratified to see more people follow my example.

Two: [LW(p) · GW(p)]

It is good if underdeveloped ideas can be raised. It is good if they can be criticized. It is good if that criticism is not punished. It is good if the author of the underdeveloped idea responds either with a spirited defense or with “yeah, you’re right, that was a bad idea for the reasons you say—thanks, this was useful!”. This is what we should incentivize. This is how intellectual progress will take place.
Or, to put it another way: criticism of a bad idea does not constitute punishment for putting that idea forth—unless, of course, being criticized inherently causes one to lose face. But why should that be so? There’s only one real reason why, and it reflects quite poorly on a social environment if that reason obtains… Here, on Less Wrong, being criticized should be ok. Responding to criticism should be ok. Argument should be ok.
Otherwise you will get an echo chamber—and if instead of one echo chamber you have multiple ones, each with their own idiosyncratic echoes… well, I simply don’t see how that’s an improvement. Either way the site will have failed in its goal.

Three [LW(p) · GW(p)]:

Without a “culture of unfettered criticism”, as you say, these very authors’ writings will go un-criticized, their claims will not be challenged, and the quality of their ideas will decline...
(This is, of course, not to mention the more obvious harms—the spread of bad ideas through our community consensus being only the most obvious of those.)

Four: [LW(p) · GW(p)]

...in the absence of open and lively criticism, bad ideas proliferate, echo chambers are built, and discussion degenerates into streams of sheer nonsense.

I also think this explains my perception (more on this below) that many comments of yours ask the author to put in a lot of effort while you put in very little yourself. Responses like this-

I assumed you meant what you wrote. It does not seem mysterious or confusing, just contradictory. (If you meant something other than what you wrote, well, I guess you’ll want to clarify).

-where it feels (to me) like it is on the other person to write well, not on you to expend effort to interpret them. They’re the one damaging the commons & who needs to improve.

---

So, to start with, I agree with my-model-of-you about the Standards Problem. There are incredibly few places in this world I can go to where I expect everyone to keep a high standard of evidence - certainly not any online platforms that I could name, nor most scientific journals. In person I have a few friends that I trust, and sending them google docs works well, but it’s clear that we need something that can coordinate intellectual progress in fields with 10s and 100s of people, not just groups of 3 or 4.

And it’s high in my priorities to get LessWrong to have a process for actually checking ideas, to which I can contribute a high effort post (like my own post on common knowledge) - where I can get good feedback that both I and the community trusts to actually find the good counter-arguments. This involves both incentivising people to find good counter-arguments, and also incentivising people to write rigorous posts (even if they are not the generators. I would love for someone to attempt to submit a technical explanation of the core ideas in Zvi’s Slack and the Sabbath sequence, for example. I think Eliezer managed to do something similar with his post “Moloch’s Toolbox”, adding rigour to Scott Alexander’s initial poetic post, and it’s sad that there’s no trusted process in the world for checking that and making it common knowledge in a larger community like this one).

But we’re not there yet, and (I think) I disagree with you about how to get to there. I think that the correct move at the minute is not for further negative incentive, but for a stronger positive incentive for good writing. I think the dream of “Keeping everything the same but removing all of the bad ideas” is likely a fiction. People need to be able to honestly put forward new and unrigorous ideas without expecting the Spanish Inquisition[1], to be able to find the one or two gems that can be elevated and canonised.

Right now my approach is to encourage people to try, and encourage them more when they get something very right. Respectively, upvotes and curation. In time, we’ll add more steps to the process, and clear places for evaluation and criticism. That's why we've been working on the AI Alignment Forum and EA Forum 2.0 (two other basic platforms to later build upon), as well as thinking a lot about peer review [LW · GW] and what additional infrastructure on the site will set up these pipelines for ideas to go through.

Oliver has previously said [LW(p) · GW(p)] that the approach you've been taking was the approach that led to a number of our top authors feeling unwelcome to post on the old LessWrong:

Eliezer, Scott G., Nate and a lot of the other top writers we’ve talked to (or who commented about the LessWrong culture somewhere publicly) have reported that LessWrong is a place that feels too hostile to post to, because of attitudes like the one you describe in this comment. Almost every major author we've interviewed has explicitly asked for some way to create content on LessWrong that is lower stakes and that allows for an explorative discussion instead of everyone just focusing on tearing apart their ideas. There has to be a place and a stage for exposing your idea to intense scrutiny, but we also need a place for explorative discussion and I am not happy about you trying to enforce a frame of intense scrutiny on every single post.

Your commenting style still has many of the properties that it did then. Let me be specific about that pattern that I’m talking about. In this thread with Benquo, this is what it felt like from my perspective:

You: These quotes from your post make no sense when juxtaposed.
Benquo: Can you do a bit of interpretive labour toward me?
You: No. It’s obvious that your quotes make no sense.
Benquo: Let me rephrase what I meant.
You: Still wrong. You’re not getting it? Notice this problem between these two quotes?
Benquo: I’m still not getting your point.
You: <Long and fairly interesting comment of your perspective on both the object level (making bread) and the meta level (what qualifies as a good explanation)>
Benquo: I want to push back on three assumptions you’ve made.
You: Another substantial comment. Also an extended snarky comment.
Benquo: It seems like you're trying to misunderstand here, and being sarcastic about it, and I'm not going to engage further.

The long, substantive point was quite interesting. But the opening three comments really didn't help Benquo, they felt to me snarky/unnecessarily aggressive, and it seemed to me you were asking Benquo to do a lot of work that you weren't willing to do (until after you'd written the three comments implying Benquo was obviously getting something wrong). I believe comments like these make many writers feel like LessWrong is a crueller place - like the LessWrong that they previously fled.

So from here on out, I, along with the rest of the mod team, do plan to treat all the comments of yours that put in low interpretive effort on your part - ones that feel like you’re requesting a large amount of effort from someone else, whilst doing no signalling that you intend to reciprocate - as bad for the health of the culture on LessWrong, and strong-downvote them accordingly, with no exceptions.

(This is a minority of your comments; I don’t expect this to significantly stem your ability to comment on the site, as the majority of your comments are much more substantive - there’s at least one in this very thread that I strongly-upvoted.)

I do want to be transparent, Said, that if almost anyone else was writing comments that I felt were this damaging to the culture, I would've come down hard on them long ago (with suspensions and eventually a ban). I don't intend to ban you any time soon, because I really value your place in this community - you're one of the few people to build useful community infrastructure like ReadTheSequences.com and the UI of GreaterWrong.com, and that's been one of the most salient facts to me throughout all of my thinking on this matter. But after spending a great deal of time and effort worrying about the effects of your comments on the culture, I don't intend to put in as much time and effort if this comes up again in the future (be it 2 months or 12), and will just use the moderation tools as seems appropriate to me.

---

[1] I just want to flag this point about what good environments for exploring ideas are like, as I think my model of you strongly disagrees with it (and thus all the points that follow from it). I'd be happy to discuss it further if so - though I do commit to spending no more than 2 hours thinking about responses on this comment thread, including reading time (and I will time myself).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-04T06:39:37.969Z · LW(p) · GW(p)

But we’re not there yet, and (I think) I disagree with you about how to get to there. I think that the correct move at the minute is not for further negative incentive, but for a stronger positive incentive for good writing. I think the dream of “Keeping everything the same but removing all of the bad ideas” is likely a fiction. People need to be able to honestly put forward new and unrigorous ideas without expecting the Spanish Inquisition[1], to be able to find the one or two gems that can be elevated and canonised.

Right now my approach is to encourage people to try, and encourage them more when they get something very right. Respectively, upvotes and curation. In time, we’ll add more steps to the process, and clear places for evaluation and criticism. That’s why we’ve been working on the AI Alignment Forum and EA Forum 2.0 (two other basic platforms to later build upon), as well as thinking a lot about peer review and what additional infrastructure on the site will set up these pipelines for ideas to go through.

Why not something like:

  1. Everything is posted to people’s personal blogs, never directly to the front page. While something’s on a personal blog, “brainstorming session” rules apply: no criticism (especially no harsh criticism), just riffing / elaboration / maybe some gentle constructive criticism (and that, perhaps, only if asked).
  2. After this, an author can edit their post, or maybe post a new, better version; or maybe they can “workshop” it elsewhere, and then post an already-better version on LW immediately. In any case, a post that has either undergone this “gentle” discussion, or doesn’t need it, may be transferred to the front page. This may happen in one of three ways:
    • The author requests a frontpage transfer. It must be approved by a mod.
    • A mod suggests a frontpage transfer. It must be approved by the author.
    • Another user (perhaps, only those with some minimum karma value) suggests a frontpage transfer. It must be approved by a mod and also by the author.
  3. Once on the frontpage, the post is exposed to the full scrutiny of the LW commentariat. Personal insults, gratuitous rudeness, and the like are still not tolerated, of course; but otherwise, the author’s feelings aren’t spared. People say what they think about the post. Spirited discussion is had. The author may defend the post, or not; in any case, it’s full “Spanish Inquisition” mode.
  4. Repeat steps 2 and 3 until the post is generally agreed to be solid, not nonsensical, worthwhile, etc. (If this never happens, so be it. Some—indeed, many—ideas ought to be firmly, unsentimentally, explicitly, and publicly rejected.)
  5. A post which survives this scrutiny and emerges as a generally-agreed-to-be-excellent gem, may then be nominated for curation, and—if approved for curation—enters into a corpus of such of the community’s output that we may proudly exhibit as genuine intellectual accomplishment, and refer to in years to come; the building blocks of a rock-solid epistemic edifice.

I believe this would satisfy both your desiderata and mine.

Replies from: Benquo, Vaniver, Benito
comment by Benquo · 2018-09-04T14:29:40.681Z · LW(p) · GW(p)

This seems like it's solving the wrong problem. The problem with your comments isn't that they are too critical or apply too high an epistemic standard; it is that you have been insulting, sarcastic, and unwilling to make clear, specific claims about what the piece was getting wrong, instead doing things like insinuating that I'm not worth listening to because I haven't proved that I know about soda bread, and exaggerating my claims and then asking me to prove the exaggerated, false version.

(It seems like I'm strongly disagreeing with Ben Pace here, not just you.)

I would have actually been pretty happy to engage with a comment along the lines of "it seems like you're making claim X, which contradicts claim Y." That would have made it easy for me to respond along the lines of "Rather than X, I actually meant to make claim X' which doesn't contradict Y." Likewise with respect to the exaggerations - if you'd made your understanding of my claims explicit, then I have some hope of correcting the misunderstanding. But if I have to guess what your interpretation is, I'm signed up for infinite amounts of interpretive labor. In general it seems like a bad policy to force people to guess what your criticism is.

Replies from: habryka4
comment by habryka (habryka4) · 2018-09-04T18:19:44.774Z · LW(p) · GW(p)

In my model, this is indeed a large part of the problem. I like the idea behind Said's proposal, and do think that it would reduce some of the incentives towards aggressiveness, but I still think that even under the proposal, the exchange on this post would have not been a good fit for LessWrong. I.e. this section from Ben Pace's comment above still stands:

The long, substantive point was quite interesting. But the opening three comments really didn't help Benquo, they felt to me snarky/unnecessarily aggressive, and it seemed to me you were asking Benquo to do a lot of work that you weren't willing to do (until after you'd written the three comments implying Benquo was obviously getting something wrong). I believe comments like these make many writers feel like LessWrong is a crueller place - like the LessWrong that they previously fled.
Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-04T20:04:18.520Z · LW(p) · GW(p)

There are two things to say here, I think.

First: ideas are a dime a dozen. Coming up with abstract conceptual constructs, “fake frameworks”, clever explanations, clever schemes, clever systems, interesting mappings, cute analogies, etc., etc., is the kind of thing that the kind of person who posts on Less Wrong (and I include myself in this set) does reflexively, while daydreaming in a boring lecture, while taking a shower, while cooking. It is easy.

And if you’re having trouble brainstorming, if no cool new ideas come to you? Browse the web for a while; among the many billions of unique web pages out there, there is no shortage of ideas. There are more ideas than we can consider in a lifetime.

The problem is in finding the good ideas—which means the true and useful ones; developing those ideas; verifying their truth and their usefulness. And that means you have to incentivize scrutiny, you have to incentivize people to notice problems, to notice inconsistencies, to do reversal tests, to consider the relevance of domain knowledge, to step back from the oh-so-clever abstract conceptual construct and apply common sense, and above all to say something instead of just thinking “hmm… ehhh… meh”, mentally shrugging, and closing the browser tab.

So when you say that I was asking Benquo to do a lot of work that I wasn’t willing to do, I am not quite sure how to respond… I mean… yes? Of course I was? It’s precisely the responsibility of the author, of the proposer of an idea, to do that work! And what do you think is easier, for me or for any other commenter? To post a short, “snarky” comment, or to post nothing at all? If the rule you enforce is “every criticism an effortpost”, then what you incentivize is silence.

It is very easy to create an echo chamber, merely by setting a high bar for any criticisms.

Your view seems to be: “The author has done us a service by not only having an idea, which itself is admirable, but by posting that idea here! He has given us this gift, and we must repay him by not criticizing that idea unless we’ve put at least as much effort into the criticism as the author put into writing the post.”

As I say above, that is not my view.


Second: Ben (Pace) says (and you quote) that “the opening three comments really didn’t help Benquo”. Well, perhaps. I can’t speak to that. But why focus on this? That is, why focus on whether my comments did or did not help Benquo?

If we were having a private, one-on-one conversation, that sort of scolding observation might be apropos. But Less Wrong is a public forum! Ought I concern myself only with whether my comments on a post help the author of the post? But if that was my only concern, I simply wouldn’t’ve posted. With all due respect to Benquo, I don’t know him personally; I have no particular reason to want to help him (nor, of course, have I any reason to harm him; I have, in fact, no particular reason to concern myself with his affairs one way or the other). If my comments were motivated merely by whether they helped the author of the post or comment to which I was directly responding, then the overwhelming majority of what I’ve ever said on Less Wrong would never have been posted.

The question, I think, is whether my comments helped anyone (and, if so, who, and how, and how many). And I can’t speak to that either.[1] But what I can say for sure is that similar comments, made by other people in analogous situations in the past, have helped me, many times; and I have observed that similar comments (mine and others’) have done great good, quite a few times in the past.

How might such “low-effort”[2] comments help? In several ways:

  1. By pointing out something that others had not noticed (or similarly, by implying a perspective on the matter other than that from which people were viewing it before).
  2. Similarly to #1, by reminding others of some relevant concern or concept of which they were aware but had forgotten, or had not thought to consider in this context, etc.
  3. By creating common knowledge of some flaw or concern or similar, which many people were thinking of, but which none of them could be sure that anyone else also thought.
  4. By alluding to some shared or collective knowledge or understanding, thereby making an extended point concisely.
  5. By “breaking the spell” of a perceived tacit agreement not to point out something, not to criticize something, not to bring up a certain topic, etc.

Less Wrong, again, is a public forum. The point is for us to collectively seek truth and build useful things. When I comment, I consider whether my comment helps the collective with those goals. Whether it specifically helps the author of whatever I’m responding to, seems to me to be of secondary importance; and what’s more, taking that goal to instead be my primary goal when commenting, would drastically reduce the general usefulness of my comments (and in practice, of course, it would not even do that, but would instead drastically reduce their frequency).

[1] Well, some people told me that they liked my comments. But maybe they were just saying that out of politeness, or because they wanted to ingratiate themselves with me, or for god knows what other reason(s).

[2] But be careful of dismissing merely concise comments as “low-effort”. Recall the old joke about the repairman who sent a client an itemized bill for hitting an expensive device once with a hammer, and thereby making it work again: “Hitting it: $1. Knowing where to hit it: $10,000.” Similarly, making a one-sentence comment is easy. Making a comment that accomplishes a great deal with one sentence is a lot more valuable.

Replies from: jimrandomh, Benito, gjm
comment by jimrandomh · 2018-09-04T22:27:09.365Z · LW(p) · GW(p)

While ideas must compete for attention, so too must criticisms. I've been led to believe that, somewhere in this thread, there is a good criticism of the top-level post. I spent some time looking for it, and what I found was a whole lot of miscommunication, criticism of things that don't quite match what was written, and general muddle. You aren't just asking Benquo to do a lot of work to avoid those miscommunications; you're also asking the people who read your comments to do a lot of work to determine whether your comment is based on a miscommunication or not.

Setting too high a bar for criticism creates an echo chamber; but setting too low a bar does too, by obscuring the real arguments in a place where people can't find them without a whole lot of time.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-04T23:46:16.911Z · LW(p) · GW(p)

I am not aware of any miscommunication that took place in my direction. Certainly, there has been misunderstanding of what I said. There has also been a lot of explaining, in detail and at length, on my part. But not so much vice-versa. Could you point out what idea of the OP you think I have misunderstood, and what attempts were made by Benquo to clarify it?

I have linked this post to a number of people, off Less Wrong. None of them had any trouble locating and understanding my criticisms; and I did repeat them several times, in several ways. To be honest, your comment perplexes me.

comment by Ben Pace (Benito) · 2018-09-05T07:29:56.672Z · LW(p) · GW(p)

As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply that idea here, it suggests that the standards problem will not be solved until we have both a great generative process and a great evaluative process, at the same time as we solve the problem of actually having a community that likes to contribute thoughtful and effortful essays about important topics; solving only one of these does not solve the problem.
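To make the equilibrium claim concrete, here is a toy payoff matrix (the move names and numbers below are entirely made up for illustration, not a model anyone has endorsed): if commenters only invest in constructive criticism when writers post effortful essays, and writers only post when the criticism is constructive, then the good state and the bad state are both self-reinforcing.

```typescript
// Toy coordination game, with made-up payoffs, illustrating "bad because
// it's a Nash equilibrium". Writers choose whether to post effortful essays;
// commenters choose whether to invest in constructive criticism.
type WriterMove = "post" | "abstain";
type CommenterMove = "constructive" | "cheap";

// payoffs[w][c] = [writer's payoff, commenter's payoff]
const payoffs: Record<WriterMove, Record<CommenterMove, [number, number]>> = {
  post: { constructive: [3, 3], cheap: [-2, 1] },
  abstain: { constructive: [0, 0], cheap: [0, 0] },
};

// A profile is a (weak) Nash equilibrium if neither player gains by
// deviating unilaterally.
function isNash(w: WriterMove, c: CommenterMove): boolean {
  const [wPay, cPay] = payoffs[w][c];
  const wAlt: WriterMove = w === "post" ? "abstain" : "post";
  const cAlt: CommenterMove = c === "constructive" ? "cheap" : "constructive";
  return payoffs[wAlt][c][0] <= wPay && payoffs[w][cAlt][1] <= cPay;
}

console.log(isNash("abstain", "cheap")); // true: the bad equilibrium
console.log(isNash("post", "constructive")); // true: the good equilibrium
```

Both profiles are equilibria, so neither side can escape the bad one alone; the generative and the evaluative improvements have to land together.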

Oli, Ray, and I will build a better evaluative process for this online community, one that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that's fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the evaluative part before the whole system works, and until we get there you're correct to be worried and to want to enforce the standards yourself with low-effort comments (and I don't mean to imply the comments don't often contain very good ideas implicit within them).

But unfortunately, given that your low-effort criticism feels so aggressive (according to me, the mods, and most writers I talk to in the rationality community), this is just going to destroy the first stage before we get to the second. If you write further comments in the pattern I have pointed to above, I will not continue to spend hours trying to pass your ITT and responding; I will just give you warnings and suspensions.

I may write another comment in this thread if there is something simple to clarify or something, but otherwise this is my last comment in this thread.

Replies from: SaidAchmiz, Benito
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T07:53:15.563Z · LW(p) · GW(p)

Without commenting on most of the rest of what you’ve said, I do want to note briefly that—

… spend hours trying to pass your ITT …

—if you are referring to this comment of yours [LW(p) · GW(p)], then I daresay the hours spent did not end up being productive (insofar as the stated goal does not seem to have been reached). I appreciate, I suppose, the motivation behind the effort; but am dubious about the value of such things in general (especially extrapolating from this example).

That aside—I wish you luck, as always, with your efforts, and intend to continue doing what I can to help them succeed.

Replies from: Ikaxas, Benito
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-05T17:38:06.772Z · LW(p) · GW(p)

This is the first point at which I, at least, saw any indication that you thought Ben's attempt to pass your ITT was anything less than completely accurate. If you thought his summary of your position wasn't accurate, why didn't you say so earlier? Your response to the comment of his that you linked gave no indication of that, and thus seemed to give the impression that you thought it was an accurate summary (if there are places where you stated that you thought the summary wasn't accurate and I simply missed it, feel free to point this out). My understanding is that often, when person A writes up a summary of what they believe to be person B's position, the purpose is to ensure that the two are on the same page (not in the sense of agreeing, but in the sense that A understands what B is claiming). Thus, I think person A often hopes that person B will either confirm that "yes, that's a pretty accurate summary of my position," or "well, parts of that are correct, but it differs from my actual position in ways 1, 2, and 3" or "no, you've completely misunderstood what I'm trying to say. Actually, I was trying to say [summary of person B's position]."

To be perfectly clear, an underlying premise of this is that communication is hard, and thus that two people can be talking past each other even if both are putting in what feels like a normal amount of effort to write clearly and to understand what the other is saying. This implies that if a disagreement persists, one of the first things to try is to slow down for a moment and get clear on what each person is actually saying, which requires putting in more than what feels like a normal amount of effort, because what feels like a normal amount of effort is often not enough to actually facilitate understanding. I'm getting a vibe that you disagree with this line of thought. Is that correct? If so, where exactly do you disagree?

Replies from: SaidAchmiz, Benquo
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T18:15:05.235Z · LW(p) · GW(p)

Out of politeness, and courtesy to Ben, I had hoped to avoid a head-on discussion of this topic. However, you make good points; and, in any case, given that you’ve called attention to this point, certainly it would be imprudent not to respond. So here goes, and I hope that Ben does not take this personally; the sentiment expressed in the grandparent still stands.

The truth is, Ben’s comment is an excellent example of why I am skeptical of “interpretive labor”, as well as related concepts like “principle of charity” (which was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere). When I read Ben’s comment, what I see is the following:

  1. Perfectly clear, straightforward language (quoted from my comments) that unambiguously and effectively conveys my points, “paraphrased” in such a way that the paraphrasing is worse in almost every way than the original: more confused, less accurate, less precise, less specific.
  2. My viewpoints (which, as mentioned, had been expressed quite clearly, and needed no rephrasing at all) distorted into caricatures of themselves.
  3. A strange mix of more-or-less passable (if degraded) portrayals of my points, plus some caricatures / strawmen / rounding-to-the-nearest-cliche, plus some irrelevant additions, that manages to turn the entire summary of my views into a mishmash, of highly dubious value.

Ben indicates [LW(p) · GW(p)] that he spent hours reading my commentary, trying to understand my views, and writing the comment in question (and I have no reason to doubt this). But if one may spend hours on such a thing, and end up with such disappointing results, what’s the point?

What’s more, I see no indication in Ben’s post that he had the same estimate of the results of his efforts as I did. If the claim is “doing interpretive labor lets you understand your interlocutor, where a straightforward reading may lead you astray”, but the reality is “doing interpretive labor leaves you with the entirely erroneous impression that you’ve understood your interlocutor when in fact you haven’t, thus wasting your time not just for no benefit, but with a negative effect”, then, again—why do it?

I think person A often hopes that person B will either confirm that “yes, that’s a pretty accurate summary of my position,” or “well, parts of that are correct, but it differs from my actual position in ways 1, 2, and 3” or “no, you’ve completely misunderstood what I’m trying to say. Actually, I was trying to say [summary of person B’s position].”

One may hope for something like this, certainly. But in practice, I find that conversations like this can easily result from that sort of attitude:

Alice: It’s raining outside.

Bob, after thinking really hard: Hmm. What I hear you saying is that there’s some sort of precipitation, possibly coming from the sky but you don’t say that specifically.

Alice: … what? No, it’s… it’s just raining. Regular rain. Like, I literally mean exactly what I said. Right now, it is raining outside.

Bob, frowning: Alice, I really wish you’d express yourself more clearly, but if I’m understanding you correctly, you’re implying that the current weather in this location is uncomfortable to walk around in? And—I’m guessing, now, since you’re not clear on this point, but—also that it’s cloudy, and not sunny?

Alice:

Bob:

Alice: Dude. Just… it’s raining. This isn’t hard.

Bob, frowning some more and looking thoughtful: Hmm…

And so on.

So, yes, communication is hard. But it’s not clear at all that this sort of solution really solves anything.

And at the same time, sometimes there are just actual disagreements. I think maybe some folks in this conversation forget that, or don’t like to think about it, or… heck, I don’t know. I’m speculating here. But there’s a remarkable lack of acknowledgment, here, of the fact that sometimes someone is just wrong, and people are disagreeing with that person because he’s wrong, and they’re right.

Replies from: Raemon, Ikaxas, Ikaxas
comment by Raemon · 2018-09-05T19:14:06.568Z · LW(p) · GW(p)

Note: I will not be engaging in much depth here, but wanted to flag one particularly important point:

Perfectly clear, straightforward language (quoted from my comments) that unambiguously and effectively conveys my points, “paraphrased” in such a way that the paraphrasing is worse in almost every way than the original: more confused, less accurate, less precise, less specific.

No. If Ben did not successfully interpret your language, your language wasn't clear or unambiguous. The point of the ITT is to verify that any successful communication has taken place at all. If it hasn't, everything that happens after that is just time wasting.

Replies from: Ikaxas, SaidAchmiz
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-05T20:39:09.442Z · LW(p) · GW(p)

Yes, this, precisely this.

comment by Said Achmiz (SaidAchmiz) · 2018-09-05T20:51:34.174Z · LW(p) · GW(p)

I’m afraid I can’t agree with this, at all. But to get into the reasons why, I’d have to speak increasingly discourteously; I do not expect this to be a productive endeavor. Feel free to contact me privately if you are interested in my further views on this, but otherwise, I will also disengage.

comment by Vaughn Papenhausen (Ikaxas) · 2018-09-05T21:33:58.915Z · LW(p) · GW(p)

I see no indication in Ben’s post that he had the same estimate of the results of his efforts as I did.

This is exactly the problem that the ITT is trying to solve. Ben's interpretation of what you said is Ben's interpretation of what you said, whether he posts it or merely thinks it. If he merely thinks it, and then responds to you based on it, then he'll be responding to a misunderstanding of what you actually said and the conversation won't be productive. You'll think he understood you, he'll perhaps think he understood you, but he won't have understood you, and the conversation will not go well because of it.

But if he writes it out, then you can see that he didn't understand you, and help him understand what you actually meant before he tries to criticize something you didn't even actually say. But this kind of thing only works if both people cooperate a little bit. (Okay, that's a bit strong, I do think that the kind of thing Ben did has some benefit even though you didn't respond to it. But a lot of the benefit comes from the back and forth.)

if one may spend hours on such a thing, and end up with such disappointing results, what’s the point?

Again, this is merely evidence that communication is harder than it seems. Ben not writing down his interpretation of you doesn't magically make him understand you better. All it does is hide the fact that he didn't understand you, and when that fact is hidden it can cause problems that seem to come from nowhere.

If the claim is “doing interpretive labor lets you understand your interlocutor, where a straightforward reading may lead you astray”

That's not the claim at all. The claim is that the reading that seems straightforward to you may not be the reading that seems straightforward to Ben. So if Ben relies on what seems to him a "straightforward reading," he may be relying on a wrong reading of what you said, because you wanted to communicate something different.

but the reality is “doing interpretive labor leaves you with the entirely erroneous impression that you’ve understood your interlocutor when in fact you haven’t, thus wasting your time not just for no benefit, but with a negative effect”, then, again—why do it?

I mean, yes, maybe Ben thought that after writing all that he understood what you were saying. But if he misunderstood, you have the power to correct that. And him putting forward the interpretation he thinks is correct gives you a jumping-off point for helping him to understand what you meant. Without that jumping-off point you would be shooting in the dark, throwing out different ways of rephrasing what you said until one stuck, or worse (as I've said several times now) you wouldn't realize he had misunderstood you at all.

sometimes there are just actual disagreements. I think maybe some folks in this conversation forget that, or don’t like to think about it, or… heck, I don’t know. I’m speculating here. But there’s a remarkable lack of acknowledgment, here, of the fact that sometimes someone is just wrong, and people are disagreeing with that person because he’s wrong, and they’re right.

Yes, but you can't hash out the substantive disagreements until you've sorted out any misunderstandings first. That would be like arguing about the population size of Athens when one of you thinks you're talking about Athens, Greece and the other thinks you're talking about Athens, Ohio.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T21:55:56.764Z · LW(p) · GW(p)

I mean, yes, maybe Ben thought that after writing all that he understood what you were saying. But if he misunderstood, you have the power to correct that.

This, I think, is where we differ (well, this, and the relative value of spending time on “interpretive labor” vs. going ahead with [what seems to you to be] the straightforward interpretation). I think that time spent thus is generally wasted (and sometimes, or often, even counterproductive), and I think that correcting misunderstandings that persist after such “interpretive labor” is not feasible in practice (at least, not by the direct route)—not to mention that attempting to do this anyway detracts from the usefulness of the discussion.

comment by Vaughn Papenhausen (Ikaxas) · 2018-09-08T19:29:56.631Z · LW(p) · GW(p)

By the way, I'm curious why you say that the principle of charity "was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere." What do you think was the original, good form of the idea, what is the difference between that and the version the rationalist memesphere has adopted, and what is so bad about the rationalist version?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-08T21:56:58.691Z · LW(p) · GW(p)

The original, good form of the principle of charity… well, actually, one or another principle under this name is decades old, or perhaps millennia; but in our circles, we can trace it back to Scott’s first post on Slate Star Codex, which I will quote almost in full:

This blog does not have a subject, but it has an ethos. That ethos might be summed up as: charity over absurdity.

Absurdity is the natural human tendency to dismiss anything you disagree with as so stupid it doesn’t even deserve consideration. In fact, you are virtuous for not considering it, maybe even heroic! You’re refusing to dignify the evil peddlers of bunkum by acknowledging them as legitimate debate partners.

Charity is the ability to override that response. To assume that if you don’t understand how someone could possibly believe something as stupid as they do, that this is more likely a failure of understanding on your part than a failure of reason on theirs.

There are many things charity is not. Charity is not a fuzzy-headed caricature-pomo attempt to say no one can ever be sure they’re right or wrong about anything. Once you understand the reasons a belief is attractive to someone, you can go ahead and reject it as soundly as you want. Nor is it an obligation to spend time researching every crazy belief that might come your way. Time is valuable, and the less of it you waste on intellectual wild goose chases, the better.

It’s more like Chesterton’s Fence. G.K. Chesterton gave the example of a fence in the middle of nowhere. A traveller comes across it, thinks “I can’t think of any reason to have a fence out here, it sure was dumb to build one” and so takes it down. She is then gored by an angry bull who was being kept on the other side of the fence.

Chesterton’s point is that “I can’t think of any reason to have a fence out here” is the worst reason to remove a fence. Someone had a reason to put a fence up here, and if you can’t even imagine what it was, it probably means there’s something you’re missing about the situation and that you’re meddling in things you don’t understand. None of this precludes the traveller who knows that this was historically a cattle farming area but is now abandoned – ie the traveller who understands what’s going on – from taking down the fence.

As with fences, so with arguments. If you have no clue how someone could believe something, and so you decide it’s stupid, you are much like Chesterton’s traveler dismissing the fence (and philosophers, like travelers, are at high risk of stumbling across bull.)

(Bolding mine, italics in original.)

A fair and reasonable principle, I think. We might also extend it—as, indeed, it has often been extended—to the injunction that opponents, and their arguments, ought not be dismissed merely because they appear to be evil. (For example, if it seems like I am suggesting that kittens must be tortured at every opportunity—well, who knows, perhaps I am?—it is still uncharitable to assume this, and to dismiss and denounce me for it, unless I’ve said this explicitly, or you’ve made a reasonable attempt to elicit a clarification, and I’ve confirmed that I am saying just that.)

So that is the unimpeachable idea. And what is the corruption? There are several, actually. Here’s one:

Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement make sense if you replace “VNM” by “Dutch book” ?

(Source. [LW(p) · GW(p)])

Here, the suggestion is that being “charitable” requires that I mentally replace one technical term with another, totally different, technical term, turning a statement that is perfectly coherent—not absurd, not insane—but wrong, into a different statement that is correct. Evidently I am expected to do this with every one of my interlocutor’s statements. So, then what? Do I just assume that whenever anyone says anything to me that I think is wrong, what they actually mean is something correct? Is it just impossible for people to be wrong? Can I never be surprised by people’s claims? Is “huh, so what you’re saying is X? really?” totally out of the question? (Never mind the question of how I’m supposed to know what to “correct” my interlocutor’s comments to—it isn’t like there’s always, or even often, just one possible “correct” interpretation!)

And then the other corruption is the other side of the same coin. It’s what happens when people do apply this form of the “principle of charity”, and end up having conversations like some I’ve had recently, where I’ve been on the receiving end of this “charity”: I say something fairly straightforward, and my interlocutor, applying the principle of charity, and believing the literal or straightforward interpretation of my words to be evil (or something), mentally transforms my comments into something different (and, presumably, non-evil), and responds to that. Communication has not taken place; my words have not been heard.

There are other corruptions, too, more subtle ones (examples of which I’d have to take some time to hunt for), but these are more than bad enough!

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-15T02:47:59.720Z · LW(p) · GW(p)

Thanks for this. Sorry it's taken me so long to reply here; I didn't mean to let this conversation hang for so long. I completely agree with about 99% of what you wrote here. The 1% I'll hopefully address in the post on this topic that I'm working on.

comment by Benquo · 2018-09-05T18:32:24.394Z · LW(p) · GW(p)

This substantially raised my estimate of how much harm Said's been causing, from "annoying but mostly harmless" to "actively attacking good discourse for being good". I've switched my moderation policy to reign of terror because on future posts I intend to delete comments by Said that are as annoying as the initial exchange here. Not sure if that extends to other commenters; probably it should, but I haven't had other problems this bad.

comment by Ben Pace (Benito) · 2018-09-05T08:06:40.561Z · LW(p) · GW(p)

nods Thank you, Said.

comment by Ben Pace (Benito) · 2018-09-11T23:25:41.727Z · LW(p) · GW(p)

This was now a week ago. The mod team discussed this a bit more, and I think it's the correct call to give Said an official warning (link [LW · GW]) for causing a significant number of negative experiences for other authors and commenters.

Said, this moderation call is different from most others, because I think there is a place for the kind of communication culture that you've advocated for, but LessWrong specifically is not that place, and it's important to be clear about what kind of culture we are aiming for. I don't think ill of you or that you are a bad person. Quite the opposite; as I've said above, I deeply appreciate a lot of the things you've built and advice you've given, and this is why I've tried to put in a lot of effort and care with my moderation comments and decisions here. I'm afraid I also think LessWrong will overall achieve its aims better if you stop commenting in (some of) the ways you have so far.

Said, if you receive a second official warning, it will come with a 1-month suspension. This will happen if another writer has an extensive interaction with you primarily based around you asking them to do a lot of interpretive labour and not providing the same in return, as I described in my main comment [LW(p) · GW(p)] in this thread.

comment by gjm · 2018-09-05T07:59:21.285Z · LW(p) · GW(p)

I am not at all sure it's always true that posting nothing at all is easier than posting a short, snarky comment. The temptation to do the latter can be almost overwhelming.

And just as ideas are a dime a dozen, so are criticisms. Your arguments against disincentivizing criticism seem to me to have parallel arguments against disincentivizing posting; and your arguments for harsh criticism of top-level posts seem to me to have parallel arguments for harsh criticism of critical comments. (Of course the two aren't exactly equivalent, not least because top-level posts are more visible than critical comments. Still, I think all the arguments cut both ways.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T08:25:15.401Z · LW(p) · GW(p)

I am not at all sure it’s always true that posting nothing at all is easier than posting a short, snarky comment. The temptation to do the latter can be almost overwhelming.

True enough! That temptation falls away, however, if one simply stops reading.

As for the rest—in principle, you’re entirely correct. In practice, I do not think what you say is true. For one thing, as I mentioned, even in the extreme case where literally no one posts anything at all, there nonetheless remain plenty of ideas to examine. But even that aside, the problem is this: once you sweep aside those ideas which are just trolling, or explicitly known to be false, or have the Time Cube nature, you’re still left with a massive pile of what might be good but could easily be (and likely is) total nonsense (as well as other possibilities like “good but ultimately not useful”, “subtly wrong”, etc.).

On the other hand, once you sweep aside those criticisms which are nothing but rudeness or abuse, or obvious trolling, etc., what you’re left with is… not much, actually. There really is a shortage of good criticism. How many of the posts on Less Wrong, within—say—the past six months, have received almost no really useful scrutiny? It’s not none!

Finally, as for this—

… your arguments for harsh criticism of top-level posts seem to me to have parallel arguments for harsh criticism of critical comments

As with so many things: one person’s modus tollens is another’s modus ponens.

comment by Vaniver · 2018-09-04T18:32:43.569Z · LW(p) · GW(p)

I think there's a problem here where "broad attention" and "harsh attention" are different tools that suggest different thresholds. I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also to everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come. I might also post an idea that I think should be held to high standards but don't expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.

My position is that subreddit-like things are the correct way to separate out rules (because a subreddit is a natural unit of moderation, because it implies rulesets are mutually exclusive, and because it makes visual presentation easy) and tag-like things are the correct way to separate out topics (because topics aren't mutually exclusive and don't obviously imply different rules). A version of LessWrong that has two subreddits, with names like 'soft' and 'sharp', seems like it would offer both a region for exploratory efforts and a region for solid accumulation, with users by default looking at both grouped together (but colored differently, perhaps).
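For concreteness, a minimal sketch of the data model this implies might look something like the following (the type and field names are hypothetical illustrations, not LessWrong's actual schema):

```typescript
// Hypothetical sketch of a two-section, tagged post model; not an actual schema.
type Section = "soft" | "sharp"; // subreddit-like: exactly one ruleset per post

interface Post {
  id: string;
  title: string;
  section: Section; // mutually exclusive, so the applicable rules are unambiguous
  tags: string[];   // topics overlap freely, so tags are a list
}

// By default, readers see both sections grouped together
// (distinguished visually, e.g. by color).
function defaultFeed(posts: Post[]): Post[] {
  return posts;
}

// A reader who wants only the high-standards region filters by section.
function sharpFeed(posts: Post[]): Post[] {
  return posts.filter((p) => p.section === "sharp");
}
```

The load-bearing choice is that `section` is a single value while `tags` is a list: rulesets stay mutually exclusive, topics overlap.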

One of the reasons why that vision seemed low priority (we might be getting to tags in the next few months, for example) was that, to the best of my knowledge, no poster was clamoring for the sharp subreddit. Most of what I would post to main in previous days would go there, and some of the posts I'm working on now are targeted at essentially that, but it's much easier to post sharp posts in soft than it is to post soft posts in sharp.

Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of 'half-baked' ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good. The reason to expose lots of people to a Nietzschean maxim is not that you think it is true and that they should just adopt it, but that you expect them to get something useful out of reacting to it. Or, to take Paul Graham's post on essays: such a standard devalues attempts to raise questions (even if you don't have an airtight answer yet) compared to arguments for positions.

Under this model, requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress (among people who haven't already sorted into private groups), and perhaps more importantly gives a misleading idea of how progress is generated. If one is trying to learn to do math like a professional mathematician, it is much more helpful to watch their day-to-day activities and chatter with colleagues than it is to read their published papers, because their published papers sweep much of the real work under the rug. Often one generates a hideous proof and then searches more and finds a prettier proof, but without the hideous proof one might have given up. And one doesn't just absorb until one is fully capable of producing professional math; one interleaves observation with attempts to do the labor oneself, discovering which bits of it are hard and getting feedback on one's products.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-04T20:32:53.285Z · LW(p) · GW(p)

I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come.

This seems like an excellent argument for dynamic RSS feeds (which I am almost certain is a point I’ve made to Oliver Habryka in a past conversation). Such a feature, plus a robust tagging system, would solve all problems of the sort you describe here.
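To make the idea concrete: a “dynamic” feed here means nothing more exotic than an RSS endpoint parameterized by tags and filters. The URL scheme below is purely hypothetical, a sketch of the kind of thing I mean rather than any existing interface:

```typescript
// Purely hypothetical URL scheme for tag-parameterized ("dynamic") RSS feeds;
// none of these endpoints or parameters actually exist.
const hypotheticalFeeds: string[] = [
  "/feed.rss?view=frontpage", // today's one-size-fits-all feed
  "/feed.rss?tags=announcements", // event notices, open registrations
  "/feed.rss?tags=game-design&minKarma=50", // narrow topic, high bar
  "/feed.rss?curated=true", // only the vetted gems
];
hypotheticalFeeds.forEach((url) => console.log(url));
```

Under such a scheme, an EA Global announcement reaches everyone subscribed to announcements without ever needing to pass as a genuine intellectual accomplishment.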

I might also post an idea that I think should be held to high standards but don’t expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.

It’s not clear why a post like this should be on Less Wrong at all, but if it must be, then there seems to be nothing stopping you from prefacing it with “please apply frontpage-level scrutiny to this one, but I don’t actually want this promoted to the frontpage”.

… tag-like things …

I think that a good tagging system should, indeed, be a high priority in features to add to Less Wrong.

… no poster was clamoring for the sharp subreddit …

Well, I was not clamoring for it because I was under the impression that the entire front page of Less Wrong was, as you say, the “sharp subreddit”. That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.

Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of ‘half-baked’ ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good.

I should like to see this belief defended. I am skeptical. But in any case, that’s what the personal blogs are for, no?

The reason to expose lots of people to a Nietzschean maxim is not that you think it is true and that they should just adopt it, but that you expect them to get something useful out of reacting to it.

Your meaning here is obscure to me, I’m afraid…

Or, to take Paul Graham’s post on essays: such a standard devalues attempts to raise questions (even if you don’t have an airtight answer yet) compared to arguments for positions.

I consider that to be one of Graham’s weakest pieces of writing. At best, it’s useless rambling. At worst, it’s tantamount to “In Defense of Insight Porn”.

… requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress …

But this is precisely why I think it’s tremendously valuable that this harsh scrutiny take place in public. A post is promoted to the front page, and there, it’s scrutinized, and its ideas are discussed, etc.

The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training. They’re not just “anyone with an internet connection”. A professional mathematician’s half-baked idea on a mathematical topic is simply not comparable with a random internet person’s (or even a random “rationalist”’s) half-baked idea on an arbitrary topic.

Replies from: Vaniver
comment by Vaniver · 2018-09-04T21:48:35.782Z · LW(p) · GW(p)

That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.

How do you expect to solve this problem? The primary thing I've heard from you is a defense of your style of commenting and its role in the epistemic environment, and regardless of whether or not I agree with it, the problem that I'm trying to solve is getting more good content on LW, because that's how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction. When we ask people who made top-tier posts before why they don't make them now, or why they put them elsewhere, the answer is resoundingly not "we were put off by mediocre content on LW" but "we were put off by commenters who were mean and made writing for LW unpleasant."

Keep in mind that the problem here is not "how do we make LW a minimally acceptable place to post things?" but "how do we make posting for LW a better strategy than its competitors?". I could put effort into editing my post on a Bayesian view of critical rationalism that's been sitting in my Google Docs drafts for months to finally publish it on LW, or I could be satisfied that it was seen by the primary person I wrote it for, and just let it rot. I could spend some more hours reading a textbook to review for LessWrong, or I could host a dinner party in Berkeley and talk to other rationalists in person.

The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training.

I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training. Rationality, of course, is much more in its infancy than mathematics is, and so we should expect professional mathematicians to be better at mathematics than rationalists are at rationality. It's also the case that people in mathematics grad school often make bad mathematical arguments that their peers and instructors should attempt to correct, but when they do so it's typically with a level of professional courtesy that, while blunt, is rarely insulting.

So it seems like the position you're taking here is either something like "no rationalist has enough reputation that they deserve something like professional courtesy", "some rationalists do, but it's perhaps a dozen of them instead of hundreds," or "concise sarcasm is what professional courtesy looks like," or something harder for me to construct.

It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you're putting Benquo in that category, I really don't see how we're going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?

Replies from: gjm, Benquo, nshepperd, SaidAchmiz
comment by gjm · 2018-09-05T07:53:14.795Z · LW(p) · GW(p)

In this very interesting discussion I mostly agree with you and Ben, but one thing in the comment above seems to me importantly wrong in a way that's relevant:

When we ask people who made top-tier posts before why they don't make them now, or why they put them elsewhere, the answer is resoundingly not "we were put off by mediocre content on LW" but "we were put off by commenters who were mean and made writing for LW unpleasant."

I bet that's true. But you also need to consider people who never posted to LW at all but, if they had, would have made top-tier posts. Mediocre content is (I think) more likely to account for them than for people who were top-tier posters but then went away.

(Please don't take me to be saying "... and therefore we should be rude to people whose postings we think are mediocre, so that they go away and stop putting off the really good people". I am not at all convinced that that is a good idea.)

Replies from: Benquo
comment by Benquo · 2018-09-05T18:40:55.268Z · LW(p) · GW(p)

I agree that meh content can be harmful in that way. I don't think that Said is succeeding at selectively discouraging meh content.

comment by Benquo · 2018-09-05T01:21:49.274Z · LW(p) · GW(p)

I mostly agree, but one part seems a bit off and I feel like I should be on the record about it:

Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training.

It's evidence that I'm a top example of the particular sort of rationality culture that LW is clustered around, and I think that's enough to make the argument you're trying to make, but being good at getting upvotes for writing about rationality is different in some important ways from being rational, in ways not captured by the analogy to math grad school.

Replies from: Vaniver
comment by Vaniver · 2018-09-05T06:18:54.273Z · LW(p) · GW(p)

I agree the analogy is not perfect, but I do think it's better than you're suggesting; in particular, it seems to me like going to math grad school as opposed to doing other things that require high mathematical ability (like quantitative finance, or going to physics grad school, or various styles of programming) is related to "writing about rationality rather than doing other things with rationality." Like, many of the most rational people I know don't ever post on LW because that doesn't connect to their goals; similarly, many of the most mathematically talented people I know didn't go to math grad school, because they ran the numbers on doing it and they didn't add up.

But to restate the core point, I was trying to get at the question of "who do you think is worthy of not being sarcastic towards?", because if the answer is something like "yeah, using sarcasm on the core LW userbase seems proper" this seems highly related to the question of "is this person making LW better or worse?".

comment by nshepperd · 2018-09-09T01:26:56.633Z · LW(p) · GW(p)

But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?

I'd just like to comment that in my opinion, if we only had one post a month on LW, but it was guaranteed to be good and insightful and useful and relevant to the practice of rationality and not wrong in any way, that would be awesome.

The world is full of content. Attention is what is scarce.

comment by Said Achmiz (SaidAchmiz) · 2018-09-04T23:41:34.256Z · LW(p) · GW(p)

That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.

How do you expect to solve this problem?

By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.

… the problem that I’m trying to solve is getting more good content on LW

But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what is it for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.

… because that’s how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction.

I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.

The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training.

I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training.

This is a shocking statement. I had to reread this sentence several times before I could believe that I’d read it right.

… just what, exactly, do you mean by “rationality”, that could make this claim true?!

So it seems like the position you’re taking here is either something like “no rationalist has enough reputation that they deserve something like professional courtesy”, “some rationalists do, but it’s perhaps a dozen of them instead of hundreds,” or “concise sarcasm is what professional courtesy looks like,” or something harder for me to construct.

Both the first and the second are plausible (“reputation” is not really the right concept here, but I’ll let it stand for now). The third is also near enough to the truth.

Let’s skip all the borderline examples and go straight to the top. Among “rationalists”, who has the highest reputation? Who is Top Rationalist? Obviously, it’s Eliezer. (Well, some people disagree. Fine. I think it’s Eliezer; I think you’re likely to agree; in any case he makes the top five easily, yes?)

I have great respect for Eliezer. I admire his work. I have said many times that the Sequences are tremendously important, well-written, etc. What’s more, though I’ve only met Eliezer a couple of times, it’s always seemed to me that he’s a decent guy, and I have absolutely nothing against him as a person.

But I’ve also read some of the stuff that Eliezer has posted on Facebook, over the course of the last half-decade or more. Some of it has been well-written and insightful. Some of it has been sheer absurdity, and if he had posted it on Less Wrong, you can bet that I would not have spared those posts from the same unsentimental and blunt scrutiny. To do any less would be intellectual dishonesty.

Even the cleverest and best of us can produce nonsense. If no one scrutinizes our output, or if we’re surrounded only by “critics” who avoid anything substantive or harsh, the nonsense will soon dominate. This is worse than not having a Less Wrong at all.

It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?

But my suggestion [LW(p) · GW(p)] answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?

Replies from: Vaniver, Benquo
comment by Vaniver · 2018-09-05T04:40:12.225Z · LW(p) · GW(p)

By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.

I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming. How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]

I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.

As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.

But my suggestion [LW(p) · GW(p)] answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?

Your explanation doesn't suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don't do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]

Perhaps another angle on the problem: there is a benefit to having one conversational locus [LW · GW]. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the 'having one conversational locus' world. It seems to me like you're making a claim of the form "the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms", and I disagree with that, because of the aforementioned models of how progress works.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T05:06:36.163Z · LW(p) · GW(p)

I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming.

Uh, how’s that? Anyway, even if we grant that you tried this, well… no offense meant, but maybe you tried it the wrong way? “We tried doing something like this, once, and it didn’t work out, therefore it’s impossible or at least not worth trying” is hardly what you’d call “solid logic”.

How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]

This is, indeed, a serious question, and one well worth considering in detail and at length, not just as a tangent to a tangent, deep in one subthread of an unrelated comments section.

But here’s one answer, given with the understanding that this is a brief sketch, and not the whole answer:

Prestige and value attract contributors. Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction. When you can say to someone, “I think your writing on <topic> is good enough for Less Wrong” and have that be a credible and unusual compliment, you will easily be able to find contributors. When you’ve created a culture where you can post on Less Wrong and there, get the best, most insightful, most no-nonsense, cuts-to-the-heart-of-the-matter criticism, people who are truly interested in perfecting their ideas will want to post here, and to submit to scrutiny.

I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.

As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.

Not so easy, I regret to say…

But my suggestion [LW(p) · GW(p)] answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?

Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]

See above for why authors would want to do this. As for “a class of dedicated curators who would rewrite their posts”, I never suggested anything remotely like this, and would never suggest it.

Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.

Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well. This is definitely a “there is a technical solution which cuts right through the Gordian knot of social problems” case.

Replies from: Vaniver, Benquo, lahwran
comment by Vaniver · 2018-09-05T06:42:19.757Z · LW(p) · GW(p)

Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction.

Where would you point to as a previous example of success in this regard? I don't think the golden age of Less Wrong counts, as it seems to me the primary reason LessWrong was ever known as a place with high standards is that Eliezer's writing and thinking were exceptional enough to draw together a group of people who found it interesting, and that group was a pretty high-caliber group. But it's not like they came here because of the insightful comments; they came here for the posts, and read the comments because they happened to be insightful (and interested in a particular mode of communication over point-seeking status games). When the same commenters were around, but the good post-writers disappeared or slowed down, the site slowly withered as the good commenters stopped checking because there weren't any good posts.

There have been a few examples of people coming to LessWrong with an idea to sell, essentially, which I think is the primary group that you would attract by having a reputation as a forum in which only good ideas survive. I don't recall many of them becoming solid contributors, but note that this is possibly a memory selection effect; when I think of "someone attracted to LW because of the prestige of us agreeing with them" I think of many people whose one-track focuses were not impressive, whereas perhaps someone I respect originally came to LW for those reasons and then had other interests as well.

With regards to the "solid logic" comment, do give us some credit for having thought through this issue and collected what data we can. From my point of view, having tried to sample the community's impressions, the only people who have said the equivalent of "ah, criticism will make the site better, even if it's annoying" are people who are the obvious suspects when post writers say the equivalent of "yeah, I stopped posting on Less Wrong because the comments were annoyingly nitpicky rather than focusing on the core of the point."

I do want to be clear that 'high-standards' and 'annoying' are different dimensions, here, and we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest pedantically interpreting the word "impossible" makes conversations more smooth than doing interpretative labor to repair small errors in a transparent way. By the way I use the word "smooth", things point in the opposite direction. [And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn't been written yet.]

Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well.

Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o'clock news, as opposed to decentralized communication, where different people are reading different blogs and can't refer to particular bits of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night's Monday Night Football game with another football fan and two gamers trying to discuss their previous night's video gaming with each other; even if they happened to play the same game, they almost certainly weren't in the same match.

The thing that tagging helps you do is say "this post is more interesting to people who care about life extension research than people who don't", but that means you don't show it to people who don't care about life extension, and so when someone chats with someone else about Sarah Constantin's analysis of a particular line of research, the other person is more likely to say "huh?" than if they sometimes get writings about a topic that doesn't natively interest them through a curated feed.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T07:24:19.753Z · LW(p) · GW(p)

Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well.

Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to particular bits of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan and two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.

The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.

We might not be talking about the same thing (in technical/implementation terms), as what you say does not apply to what I had in mind. (It’s awkward to hash this out via comments like this; I’d be happy to discuss this in detail in a real-time chat medium like IRC.)

comment by Said Achmiz (SaidAchmiz) · 2018-09-05T14:24:20.678Z · LW(p) · GW(p)

… we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest pedantically interpreting the word “impossible” makes conversations more smooth than doing interpretative labor to repair small errors in a transparent way.

“Pedantically” is a caricature, I think; I would say “straightforwardly”—but then, we have a live example of what we’re referring to, so terminology is not crucial. That aside, I stand by this point, and reaffirm it.

I am deeply skeptical of “interpretive labor”, at least as you seem to use the term.[1] Most examples that I can recall having seen of it, around here, seem to me to have affected the conversation negatively. (For instance, your example [LW(p) · GW(p)] elsethread is exactly what I’d prefer not to see from my interlocutors.)

In particular, this—

repair small errors in a transparent way

—doesn’t actually happen, as far as I can tell. What happens instead is that errors are compounded and complicated, while simultaneously being swept under the rug. It seems to me that this sort of “interpretive labor” does much to confuse and muddle discussions on Less Wrong, while effecting the appearance of “smooth” and productive communication.

By the way I use the word “smooth”, things point in the opposite direction.

I don’t know… I think it’s at least possible that we’re using the word in basically the same way, but disagree on what effects various behaviors have. But perhaps this point is worth discussing on its own (if, perhaps, not in this thread): what is this “smoothness” property of discussions, and why is it desirable? (Or is it?)

[And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]

This sounds like a post I’d enjoy reading!


[1] Where is this term even from, by the way…?

Replies from: Benquo
comment by Benquo · 2018-09-05T14:39:37.316Z · LW(p) · GW(p)

https://acesounderglass.com/2015/06/09/interpretive-labor/

comment by Benquo · 2018-09-05T14:42:12.074Z · LW(p) · GW(p)
and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride

This seems like a proposal to make LW contentless, with lots of vacuously true statements.

comment by the gears to ascension (lahwran) · 2018-09-05T06:19:27.047Z · LW(p) · GW(p)
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated

They should ban you for how you're interacting right now. I don't know why they're putting up with your dodging the issue, but you either don't have the ability to figure out when someone is correctly calling you out, or aren't playing nice. Your brand of bullshit is a major reason I've avoided less wrong, and I want it gone. I want people to critique my ideas ruthlessly and not critique me as a person with Deservingness at all. if you think being an asshole is normal, go away. you don't have to hold back on what you think the problems are, but I sure as hell expect you to say what you think the problems are without implying I said them wrong.

Replies from: Benito
comment by Ben Pace (Benito) · 2018-09-05T07:03:30.402Z · LW(p) · GW(p)

Lahwran, I downvoted your comment because I think it should be costly to write something that lowers the tone like this, but I appreciate you saying that this is the reason you left LW, and you might be right that I'm being too civil relative to the effects Said is directly having.

I've put in a bunch of effort to trade models of good discourse, but this conversation is heading towards its close. As I've said, if Said writes these sorts of comments in future, I'll be hitting fairly hard with mod tools, regardless of his intentions. Notice that this brand of bullsh*t is otherwise largely gone from LW since the re-launch in March - Said has been an especially competent and productive individual who has this style of online interaction, so I've not wanted to dissuade him as strongly as the rest who've left, but my patience has since worn thin on this front, and I won't be putting up with it in future.

comment by Benquo · 2018-09-05T01:46:03.749Z · LW(p) · GW(p)
But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what it’s for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.

It seems like, having interpreted Vaniver as making an obvious error, you decided to argue at length against it instead of considering that he might have meant something else. This is tedious and is punishing Vaniver for not tediously overspecifying everything.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T02:09:28.662Z · LW(p) · GW(p)

This attitude makes very little sense.

Suppose that one Alice writes something which I, on the straightforward reading, consider to be definitely and clearly wrong. I read it and imagine two possibilities:

(A) Alice meant exactly what it seems like she wrote.

Presumably, then, Alice disagrees with my judgment of what she wrote as being definitely and clearly wrong. Well, there is nothing unusual in this; I have often encountered cases where people hold views which I consider to be definitely and clearly wrong, and vice-versa. (Surely you can say the same?)

In this case, what else is there to do but to respond to what Alice wrote?

(B) Alice meant something other than what it seems like she wrote.

What might that be? Who knows. I could try to guess what Alice meant. However, that is impossible. So I won’t try. If Alice didn’t mean the thing that it seems, on a straightforward reading, like she meant, then what she actually meant could be anything at all.

But suppose I go ahead and try anyway, I come up with some possible thing that Alice could’ve meant. Do I have any reason to conclude that this is the only possibility for what Alice could’ve meant? I do not. I might be able to think longer, and come up with other possibilities. None of them would offer me any reason to assume that that one is what Alice meant.

And suppose I do pick out (via some mysterious and, no doubt, dubious method) some particular alternate meaning for Alice’s words. Well, and is that correct, then, or wrong? If it’s wrong, then I will argue the point, presumably. But then I will be in the strange position of saying something like this:

“Alice, you wrote X. However, X is obviously wrong. So you couldn’t have meant that. You instead meant Y, probably. But that’s still wrong, and here’s why.”

Have I any reason at all to expect that Alice won’t come back with “Actually, no, I did mean X; why do you say it’s obviously wrong?!”, or “Actually, no, I meant Z!”? None at all. And I’ll have wasted my time, and for what?

This sort of thing is almost always a pointless and terrible way of carrying on a discussion, which is why I don’t and won’t do it.

Replies from: Vaniver, Benquo
comment by Vaniver · 2018-09-05T04:11:39.888Z · LW(p) · GW(p)
However, that is impossible.

Consider response A:

"I often successfully guess what people meant; it being impossible comes as a surprise to me. Are you claiming this has never happened to you?"

And response B:

Ah, Said likely meant that it is impossible to reliably infer Alice's meaning, rather than occasionally doing so. But is a strategy where one never infers truly superior to a strategy where one infers, and demonstrates that they're doing so such that a flat contradiction can be easily corrected?

[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]

[EDIT: I made a mistake in this comment, where response B was originally [what someone would say after doing that substitution], and then I said "wait, it's not obvious where that came from, I should put the thoughts that would generate that response" and didn't apply the same mental movement to say "wait, it's not obvious that response A is a flat response and response B is a thought process that would generate a response, which are different types, I should call that out."]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T04:28:15.429Z · LW(p) · GW(p)

Yes, exactly; response A would be the more reasonable one, and more conducive to a smooth continuation of the discussion. So, responding to that one:

“Impossible” in a social context means “basically never happens, and if it does happen then it is probably by accident” (rather than “the laws of physics forbid it!”). Also, it is, of course, possible to guess what someone means by sheer dumb luck—picking an interpretation at random out of some pool of possibilities, no matter how unlikely-seeming, and managing by chance to be right.

But, I can’t remember a time when I’ve read what someone said, rejected the obvious (but obviously wrong) interpretation, tried to guess what they actually meant, and succeeded. When I’ve tried, the actual thing that (as it turned out) they meant was always something which I could never have even imagined as a hypothesis, much less picked out as the likeliest meaning. (And, conversely, when someone else has tried to interpret my comments in symmetric situations, the result has been the same.)

In my experience, this is true: for all practical purposes, either you understand what someone meant, or it’s impossible to guess what they could’ve meant instead.

[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]

This is not what I’m implying, because it’s not what I’m saying and what I’m saying has a straightforward meaning that isn’t this. See this comment [LW(p) · GW(p)]. “Literally” is a strawman (not an intentional one, of course, I’m assuming); it can seem like Alice means something, without that necessarily being anything like the “literal reading” of her words (which in any case is a red herring); “straightforward” is what I said, remember.

Edit: I don’t know where all this downvoting is coming from; why is the parent at −2? I did not downvote it, in any case…

Replies from: Raemon
comment by Raemon · 2018-09-05T06:57:20.744Z · LW(p) · GW(p)

A couple more things I think your disjunction is missing.

1) If you don't know what Alice means, instead of guessing, you can ask.

(alternately, you can offer a brief guess, and give them the opportunity to clarify. This has the benefit of training your ability to infer more about what people mean). You can do all this without making any arguments or judgments until you actually know what Alice meant.

2) Your phrasing implies that if Alice writes something that "seems to straightforwardly mean something, and Alice meant something else", that the issue is that Alice failed to write adequately. But it's also possible for the failure to be on the part of your comprehension rather than Alice's writing. (This might be because Alice is writing for an audience of people with more context/background than you, or different life experiences than you)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T07:46:15.810Z · LW(p) · GW(p)

Re: asking: well, sure. But what level of confidence in having understood what someone said should prompt asking them for clarification?

If the answer is “anything less than 100%”, then you just never respond directly to anything anyone writes, without first going through an elaborate dance of “before I respond or comment, let me verify that this is what you meant: [insert re-stating of the entirety of the post or comment you’re responding to]”; then, after they say “yes, that is what I meant”, you respond; then, before they respond to you, they first go “now, let me make sure I understand your response: [insert re-stating of the entirety of your response]” … and so on.

Obviously, this is no way to have a discussion.

But if there is some threshold of confidence in having understood that licenses you to go ahead and respond, without first asking whether your interlocutor meant the thing that it seems like they meant, then… well, you’re going to have situations where it turns out that actually, they meant something else.

Unless, of course, what you’re proposing is a policy of always asking for clarification if you disagree, or think that your interlocutor is mistaken, etc.? But then what you’re doing is imposing a greater cost on dissenting responses than assenting ones. Is this really what you want?

Re: did Alice fail to communicate or did I fail to comprehend: well, the question of “who is responsible for successful communication—author or audience?” is hardly a new one. Certainly any answer other than “it is, to some extent, a collaborative effort” is clearly wrong.

The question is, just how much is “some extent”? It is, of course, quite possible to be so pedantic, so literal-minded, so all-around impenetrable, that even the most heroically patient and singularly clear of authors cannot get through to you. On the other hand, it’s also possible to write sloppily, or to just plain have bad ideas. (If I write something that is wrong, and you express your disagreement, and I say “no, you’ve misunderstood, actually I’m right”, is it fair to say that you’ve failed in your duty as a conscientious reader?)

In any case, the matter seems somewhat academic. As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said. (Certainly I’ve seen no one posting any corrections to my reading of the OP. Mere claims that I’ve misunderstood, with no elaboration, are hardly convincing!)

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-06T04:51:30.729Z · LW(p) · GW(p)

what level of confidence in having understood what someone said should prompt asking them for clarification?

This is an isolated demand for rigor. Obviously there's no precise level of confidence, in percentages, that should prompt asking for clarification. As with many things, context matters. Sometimes, what indicates a need to ask for clarification is that a disagreement persists for longer than it seems like it ought to (indicating that there might be something deeper at work, like a misunderstanding). Sometimes, what indicates this is your interlocutor saying something that seems absurd or obviously mistaken. The second seems relevant in the immediate instance, given that what prompted this line of discussion was your taking Vaniver at his word when he said something that seemed, to you, obviously mistaken.

Note that I say "obviously mistaken." If your interlocutor says something that seems mistaken, that's one thing, and as you say, it shouldn't always prompt a request for clarification; sometimes there's just a simple disagreement in play. But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn't wont to say obviously wrong things, that may indicate that there is something they see that you don't, in which case it would be useful to ask for clarification.

In this particular case, it seems to me that "good content" could be vacuous, or it could be a stand-in for something like "content that meets some standards which I vaguely have in mind but don't feel the desire or need to specify at the moment." It looks like Vaniver, hoping that you would realize that the first usage is so obviously dumb that he wouldn't be intending it, used it to mean the second usage in order to save some typing time or brain cycles or something (I don't claim to know what particular standards he has in mind, but clearly standards that would be useful for "solving problems related to advancing human rationality and avoiding human extinction"). You interpreted it as the first anyways, even though it seemed to you quite obviously a bad idea to optimize for "good content" in that vacuous sense. Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the possibility that Vaniver meant something different, at which point you could have asked for clarification ("What do you have in mind when you say 'good content'? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?")

As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said.

"The case at hand" was your misunderstanding of Vaniver, not Benquo.


Hm. After writing this comment I notice I did something of the same thing to you. I interpreted your request for a numerical threshold literally, even though I considered it not only mistaken, but obviously so. Thus I retract my claim (at least in its strong form "any time your interlocutor says something that seems obviously mistaken, ask for clarification"). I continue to think that asking for clarification is often useful, but I think that, as with many things, there are few or no hard-and-fast rules for when to do so; rather, there are messy heuristics. If your interlocutor says something obviously mistaken, that's sometimes an indication that you should ask for clarification. Sometimes it's not. I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn't mean the vacuous interpretation of "good content." I think I probably don't need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I'm not really sure what to do about that right now, or whether and how to revise it.

EDIT: if it turns out you didn't mean it literally, then obviously I will know how I should revise my judgements (namely I should revise my judgement that I didn't need to ask you for clarification).

Replies from: Benquo, SaidAchmiz
comment by Benquo · 2018-09-06T15:12:27.217Z · LW(p) · GW(p)

Ikaxas, I would be strong-upvoting your comments here except that I'm guessing engaging further here does more harm than good. I'd like to encourage you to write a separate post instead, perhaps reusing large portions of your comments. It seems like you have a bunch of valuable things to say about how to use the interpretive labor concept properly in discourse.

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-06T23:43:07.409Z · LW(p) · GW(p)

Thanks for the encouragement. I will try writing one and see how it goes.

comment by Said Achmiz (SaidAchmiz) · 2018-09-06T05:25:05.509Z · LW(p) · GW(p)

Well, the second part of your comment (after the rule) pre-empts much of what I was going to say, so—yes, indeed. Other than that:

I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn’t mean the vacuous interpretation of “good content.” I think I probably don’t need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I’m not really sure what to do about that right now, or whether and how to revise it.

Yes, I think this seems like a rather self-serving set of judgments.

As it happens, I didn’t mean my question literally, in the sense that it was a rhetorical question. My point, in fact, was almost precisely what you responded, namely: clearly the threshold is not 100%, and also clearly, it’s going to depend on context… but that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.

Other points:

But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn’t wont to say obviously wrong things …

I have never met such a person, despite being surrounded, in my social environment, by people at least as intelligent as I am, and often more so. In my experience, everyone says obviously wrong things sometimes (and, conversely, I sometimes say things that seem obviously wrong to others). If this never happens to you, then this might be evidence of some troubling properties of your social circles.

In this particular case, it seems to me that “good content” could be vacuous, or it could be a stand-in for something like “content that meets some standards which I vaguely have in mind but don’t feel the desire or need to specify at the moment.”

That’s still vacuous, though. If that’s what it’s a stand-in for, then I stand by my comments.

Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification (“what do you have in mind when you say ‘good content’, that seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?”)

Indeed, I could have. But consider these two scenarios:

Scenario 1:

Alice: [makes some statement]

Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?

Scenario 2:

Alice: [makes some statement]

Bob: That’s obviously wrong, because [reasons].

Alice: But of course [straightforward reading] isn’t actually what I meant, as that would indeed be obviously wrong. Instead, I meant [other thing].

You seem to be saying that Scenario 1 is obviously (!!) superior to Scenario 2. But I disagree! I think Scenario 2 is better.

… now, does this claim of mine seem obviously wrong to you? Is it immediately clear why I say this? (If I hadn’t asked this, would you have asked for clarification?) I hope you don’t mind if I defer the rest of my point until after your response to this bit, as I think it’s an interesting test case. (If you don’t want to guess, fair enough; let me know, and I’ll just make the rest of my point.)

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-08T19:07:08.512Z · LW(p) · GW(p)

I've been mulling over where I went wrong here, and I think I've got it.

that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.

I think this is where I misinterpreted you. I think I thought you were trying to claim that unless there's some threshold or some clear rule for deciding when to ask for clarification, it's not worth implementing "ask for clarification if you're unsure" as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that's not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, without this being anyone's fault. However, I also think that if there is a misunderstanding, this will become apparent at some point if the conversation goes on long enough, and whenever that is, it's worth stopping to have one or both parties do something in the vicinity of trying to pass the other's ITT, to see where the confusion is.

I think another part of the problem here is that part of what I was trying to argue was that in this case of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I'm much less confident of this now. My arguing that, if a discussion goes on long enough, misunderstandings will reveal themselves, isn't enough to argue that in this case you should immediately have recognized that you had misunderstood (if in fact you have misunderstood, which may not be the case if you still object to Vaniver's point as I reframed it). My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn't really entail that you ought to have asked for clarification here, in this very instance.

Anyway, as Ben suggested I'm working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I'll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I've been looking for a while and haven't been able to find it if there is.)

consider these two scenarios

I agree the model I've been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don't think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you're going with this is something like "scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way)."

If this is where you are going, I have a couple disagreements with it, but I'll wait until you've explained the rest of your point to state them in case I've guessed wrong (which I'd guess is fairly likely in this case).

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-08T20:58:33.848Z · LW(p) · GW(p)

My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”

Basically, yes.

The problem, really, is—what? Not misunderstanding per se; that is solvable. The problem is the double illusion of transparency; when I think I’ve understood you (that is, I think that my interpretation of your words, call it X, matches your intent, which I assume is also X), and you think I’ve understood you (that is, you think that my interpretation of your words is Y, which matches what you know to be your intent, i.e. also Y); but actually your intent was Y and my interpretation is X, and neither of us is aware of this composite fact.

How to avoid this? Well, actually this might be one of two questions: first, how to guarantee that you avoid it? second, how to mostly guarantee that you avoid it? (It is easy to see that relaxing the requirement potentially yields gains in efficiency, which is why we are interested in the latter question also.)


Scenario 1—essentially, verifying your interpretation explicitly, every time any new ideas are exchanged—is one way of guaranteeing (to within some epsilon) the avoidance of double illusion of transparency. Unfortunately, it’s extremely inefficient. It gets tedious very quickly; frustration ensues. This approach cannot be maintained. It is not a solution, inasmuch as part of what makes a solution workable is that it must be actually practical to apply it.

By the way—just why is scenario 1 so very, very inefficient? Is it only because of the overhead of verification messages (a la the SYN-ACK of TCP)? That is a big part of the problem, but not the only problem. Consider this extended version:

Scenario 1a:

Alice: [makes some statement]

Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?

Alice: Wait, what? Why would that be obviously wrong?

Bob: Well, because [reasons], of course.

So now we’ve devolved into scenario 2, but having wasted two messages. And gained… what? Nothing.


Scenario 2—essentially, never explicitly verifying anything, responding to your interpretation of your interlocutor’s comments, and trusting that any misinterpretation will be inferred from your response and corrected—is one way of mostly guaranteeing the avoidance of double illusion of transparency. It is not foolproof, of course, but it is very efficient.


Scenarios 1 and 2 aren’t our only options. There is also…

Scenario 3:

Alice: [makes some statement]

Bob: Assuming you meant [straightforward reading], that is obviously wrong, because [reasons].

Note that we are now guaranteed (and not just mostly guaranteed) to avoid the double illusion of transparency. If Bob misinterpreted Alice, she can correct him. If Bob interpreted correctly, Alice can immediately respond to Bob’s criticism.

There is still overhead; Bob has to spend effort on explaining his interpretation of Alice. But it is considerably less overhead than scenario 1, and it is the minimum amount of overhead that still guarantees avoidance of the double illusion of transparency.
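
(The comparison can also be put as a toy model; the numbers below are illustrative assumptions, not measurements, and the extra wording scenario 3 adds inside each message is noted but not counted.)

```python
def toy_overhead(p_misread, p_misread_surfaces):
    """Toy model of the three scenarios above.

    p_misread: chance a reply rests on a misinterpretation.
    p_misread_surfaces: chance a misinterpretation is noticed from
        the reply alone (the gamble scenario 2 takes).
    Returns {scenario: (expected extra messages per point raised,
                        chance of an undetected double illusion)}.
    """
    return {
        # Scenario 1: explicit verification before every substantive reply.
        1: (2.0, 0.0),
        # Scenario 2: reply directly; trust misreadings to surface on their own.
        2: (2 * p_misread * p_misread_surfaces,
            p_misread * (1 - p_misread_surfaces)),
        # Scenario 3: state the interpretation inside the reply itself, so
        # every misreading surfaces and costs one correction round.
        3: (2 * p_misread, 0.0),
    }

# Illustrative numbers: 20% misread rate, half of misreads surface unaided.
print(toy_overhead(0.2, 0.5))
# -> {1: (2.0, 0.0), 2: (0.2, 0.1), 3: (0.4, 0.0)}
```

On these assumptions, scenario 3 matches scenario 1’s guarantee at lower expected message cost whenever misreadings are less than certain, which is the sense in which it is the minimum overhead that still yields the guarantee.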

Personally, I favor the scenario 3 approach in cases of only moderate confidence that I’ve correctly understood my interlocutor, and the scenario 2 approach in cases of high confidence that I’ve correctly understood. (In cases of unusually low confidence, one simply asks for clarification, without necessarily putting forth a hypothesized interpretation.)


Scenarios 2 and 3 are undermined, however—their effectiveness and efficiency dramatically lowered—if people take offense at being misinterpreted, and demand that their critics achieve certainty of having correctly understood them, before writing any criticism. If people take any mis-aimed criticism as a personal attack, or lack of “interpretive labor” (in the form of the verification step as a prerequisite to criticism) as a sign of disrespect, then, obviously, scenarios 2 and 3 cannot work.

This constitutes a massive sacrifice of efficiency of communication, and thereby (because the burden of that inefficiency is borne by critics) disincentivizes lively debate, correction of flaws, and the exchange of ideas. What is gained, for that hefty price, is nothing.

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-15T02:54:15.845Z · LW(p) · GW(p)

After quite a while thinking about it I'm still not sure I have an adequate response to this comment; I do take your points, they're quite good. I'll do my best to respond to this in the post I'm writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn't adequately address your points.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-15T02:56:17.502Z · LW(p) · GW(p)

Sounds good, and I am looking forward to reading your post!

comment by Said Achmiz (SaidAchmiz) · 2018-09-08T19:40:07.440Z · LW(p) · GW(p)

Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.

Indeed there is. You go to the All view or the Meta view, and click the green “+ New post” link at the upper-right, just below the tab bar. (The new-post link currently doesn’t display when viewing your own user page, which is an oversight and should be fixed soon.)

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-08T21:34:45.648Z · LW(p) · GW(p)

Ah, thanks!

comment by Benquo · 2018-09-05T02:11:40.444Z · LW(p) · GW(p)

Your disjunction is wrong.

Replies from: Ikaxas, SaidAchmiz
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-08T21:23:21.801Z · LW(p) · GW(p)

EDIT: oops, replied to the wrong comment.

comment by Said Achmiz (SaidAchmiz) · 2018-09-05T02:25:23.922Z · LW(p) · GW(p)

How?

Replies from: Benquo
comment by Benquo · 2018-09-05T02:32:47.251Z · LW(p) · GW(p)

Spurious binary between one way things really seem, and the many ways one might guess. Even the one way it seems to you is in fact an educated guess.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T02:41:52.140Z · LW(p) · GW(p)

That’s not a spurious binary, and in any case it doesn’t make the disjunction wrong. Observe:

Let P = “Alice meant exactly what it seems like she wrote.”

¬P = “It is not the case that Alice meant exactly what it seems like she wrote.”

And we know that P ∨ ¬P is true for all P.

Is “It is not the case that Alice meant exactly what it seems like she wrote” the same as “Alice meant something other than what it seems like she wrote”?

No, not quite. Other possibilities include things like “Alice didn’t mean anything at all, and was making a nonsense comment, as a sort of performance art”, etc. But I think we can discount those.

comment by Ben Pace (Benito) · 2018-09-04T20:05:03.454Z · LW(p) · GW(p)

First thoughts:

  • One thing I don't like about this proposal is - and you're hearing me right - that it doesn't do enough to positively incentivise criticism.
  • In particular, there needs to be a place where when the idea is put to the test (to peer review), if someone writes a knock-down critique, that person is celebrated and their accomplishments are to be incorporated into their reputation in the community.
  • We want to have a place that both strongly incentivises good ideas, and strongly incentivises checking them - not disanalogous to how in Inadequate Equilibria the Visitor says on his planet the 'replicators' are given major prestige.
  • Because I want criticisms that look like this [LW · GW], not like this [LW(p) · GW(p)].
    • The first link is Zvi's thoughtful and well-written critique of a point made in Eliezer's "No Fire Alarm for AGI" post. This is good criticism that puts lots of effort into being clear to the reader, and is very well written. That's why we curated it and it got loads of karma.
    • The comments of yours that I don't like are things I would not want to find on almost any site successfully pursuing intellectual progress. They're not nice comments to receive, but they're also not very good criticism. Again, this isn't all of your comments, but it often feels to me like you're not engaging with the post very well (can't pass the author's ITT), or if your criticisms are true they're attacking side notes (like a step that wasn't rigorous, even though it was an unimportant step that wouldn't be hard to make rigorous). If you look at the places of great intellectual progress in groups, you don't see that they reduced the effort barrier to criticism to the minimum; rather, they increased the incentive for important criticism, the kind that knocked down the core of an idea.
      • If your criticisms were written in a way that didn't feel like it was rude / putting a burden on the author that you're not willing to share, then that would be fine. If they were important (e.g. you were knocking down core ideas in the sequences, big mistakes everyone was making, or even just the central point of the post) then I would accept more blunt/rudeness. But when it's neither, then it's not good enough.

(I'm at 30 mins.)

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-04T20:56:50.909Z · LW(p) · GW(p)

(I’m at 30 mins.)

Honestly, this is just insulting. I don’t know if you intended it that way, but this does an excellent job of discouraging me from engaging.

Replies from: gjm, Benito
comment by gjm · 2018-09-05T08:02:17.185Z · LW(p) · GW(p)

For what it's worth, I bet the intention was as follows: Ben had mentioned that he was going to ration his time in this thread for fear of rabbit-holes; he thought you might prefer to have some idea how much more Said-Ben discussion was possible, and so (the amount of time he'd spent not being immediately visible) he added that note. So, exactly the opposite of insulting intent.

comment by Ben Pace (Benito) · 2018-09-04T21:05:01.986Z · LW(p) · GW(p)

I didn't intend it that way. I won't write them in future, and will keep them private.

Replies from: Benquo
comment by Benquo · 2018-09-05T18:11:02.063Z · LW(p) · GW(p)

If Said is insulted by your clarity about how much time you're investing in interpretive labor, then I think this is evidence that Said's sense of offense is not value-aligned with good discourse. If someone put a note like that on a response to a comment by me, I'd feel like they were making an effort to be metacooperative. 30 minutes is a long time for a single comment!

comment by Said Achmiz (SaidAchmiz) · 2018-09-04T20:59:53.366Z · LW(p) · GW(p)

If your criticisms were written in a way that didn’t feel like it was rude / putting a burden on the author that you’re not willing to share, then that would be fine. If they were important (e.g. you were knocking down core ideas in the sequences, big mistakes everyone was making, or even just the central point of the post) then I would accept more blunt/rudeness. But when it’s neither, then it’s not good enough.

As I’ve commented [LW(p) · GW(p)], the point in that comment went to the heart of my objection not only to this post, but to a great many posts that are similar to this one along a critically important axis. I continue to be dismayed by the casualness with which this concern has been dismissed, given that it seems to me to be of the greatest importance to the epistemic health of Less Wrong.

comment by Benquo · 2018-08-27T04:10:10.362Z · LW(p) · GW(p)

What's your best guess as to what I meant here?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-27T04:25:50.433Z · LW(p) · GW(p)

I assumed you meant what you wrote. It does not seem mysterious or confusing, just contradictory. (If you meant something other than what you wrote, well, I guess you’ll want to clarify.)

In case my point was obscure—time travel stories aren’t real, and Robinson Crusoe is also fictional. The sorts of skills that fictional characters use in stories like this, bear, for the most part, little resemblance to the sorts of skills that ordinary, real people use to navigate their ordinary, real lives.

Your yeast example is an excellent example of basically this very point.

Replies from: Benquo
comment by Benquo · 2018-08-27T14:14:08.618Z · LW(p) · GW(p)

Time travel and Robinson Crusoe stories (and zombie apocalypse stories etc) tend to make the assumption that if you "know about" X you can reinvent it from scratch. This implies a standard of knowledge such that you are competent to interact with the thing, and with its precursors, or at least have an idea of how you'd learn to do so. What my mother had learned about yeast in school was completely inadequate for that, but my explanation here is adequate. Learning what yeast is by learning things like its cell structure, scientific name, etc, doesn't give you a critical piece of information about how it exists in the physical world you navigate - that it's *already on the flour*.

Replies from: Pattern, SaidAchmiz
comment by Pattern · 2018-08-27T18:28:52.592Z · LW(p) · GW(p)

Like "Truly a part of you [LW · GW]" except for material production.

comment by Said Achmiz (SaidAchmiz) · 2018-08-27T18:58:56.251Z · LW(p) · GW(p)

What my mother had learned about yeast in school was completely inadequate for that, but my explanation here is adequate.

No. Your explanation here is definitely, definitely not adequate.

And the reason you are able to deceive yourself about this is that—again—such “reinvent it from scratch” scenarios are totally fictional. You haven’t actually had to reinvent the yeast that we buy in a supermarket from scratch. Like the time travel story and like Robinson Crusoe, all you’ve had to optimize your explanation for is “this makes for fun reading”, not “this actually works”.

Replies from: Vaniver, Bastian Sommerfeld
comment by Vaniver · 2018-08-30T05:07:17.760Z · LW(p) · GW(p)
No. Your explanation here is definitely, definitely not adequate.

On the object level, I read Benquo's "that" as referring to "make sourdough starter using flour and the wild yeast already present in the flour," which in fact this post is sufficient for (because it points out that you can just leave the dough out and it will attain sourdough-nature).

The post isn't called "yeast," though, it's called "zetetic explanation," and is about how explanations try to hook into ontologies. The variation that it's trying to point out--that some explanations are trying to talk about underlying generators while other explanations are trying to talk about ritual behavior--seems real, though of course explanations in the wild will cover many such levels.

What's not clear to me is... whether you see the dimension that Benquo is trying to point at, and what specifically you're trying to fault him for? Like, yes, obviously this post does not contain every fact about yeast. As mentioned in the post:

Of course, it can be hard to know where to stop in such explanations, and it can also be hard to know where to start. This post could easily have been twice as long.

Which implies that at least half of it, in some sense, is 'left out.' But is Benquo's explanation trying to hook into people's generators, such that their map of the world has more counterfactual potency than it did before, or is it merely trying to present people with additional rituals they can perform?

The reason that Robinson Crusoe is brought up is not that time travel stories are 'real' or part of 'the ordinary means by which people navigate their lives.' It's because there's variance in 'the ordinary means by which people navigate their lives,' with some relying heavily on generators and others relying heavily on rituals, and time travel stories expose the difference, as the person who relies on ritual loses their ritual whereas the person who relies on generators does not lose their generators. A while ago, some housemates were attempting to mash potatoes, but didn't have a potato masher (as had existed in their childhood kitchen), and were despairing at doing so with a fork. "Use a glass," I said, demonstrating once. This wasn't a potato-mashing ritual I had inherited from someone else, but querying my tool generators to find something in the kitchen that was better at mashing that volume of potatoes than a fork. And they seemed somewhat impressed that I could immediately handle their problem and embarrassed that they hadn't, especially because of how clearly it connected to the dimension that Benquo is gesturing towards here.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-30T05:49:33.648Z · LW(p) · GW(p)

What’s not clear to me is… whether you see the dimension that Benquo is trying to point at, and what specifically you’re trying to fault him for?

I answered this question at some length in this comment [LW(p) · GW(p)].

is Benquo’s explanation trying to hook into people’s generators, such that their map of the world has more counterfactual potency than it did before

I confess that this terminology (hooking into generators, counterfactual potency) is unfamiliar to me, so I can’t really answer this. Is there some place where these terms/concepts are explained?

there’s variance in ‘the ordinary means by which people navigate their lives,’ with some relying heavily on generators and others relying heavily on rituals, and time travel stories expose the difference, as the person who relies on ritual loses their ritual whereas the person who relies on generators does not lose their generators

But this isn’t right, is it? Rather, the author of the story describes the person who “relies on ritual” as losing their ritual, and the author of the story describes the person who “relies on generators” as not losing their generators.

In other words: the quoted part of your comment (and similar sentiments) only make sense as an instance of “generalizing from fictional evidence”.

What does it tell us about reality, that people in time-travel stories who behave in certain ways, get certain results? Not much, I’d say, except that science-fiction authors imagine certain hypothetical scenarios in a certain way, or that readers prefer to read certain sorts of stories, etc.

A while ago, some housemates were attempting to mash potatoes, but didn’t have a potato masher (as had existed in their childhood kitchen), and were despairing at doing so with a fork. “Use a glass,” I said

Indeed, or a whisk, or a wooden spoon, or a hand mixer. I’ve had a number of similar experiences, myself (for example, I once improvised a double boiler with a sauté pan, a saucepan, and a length of string).

But what does this have to do with the OP? It does not seem to me like your cleverly practical solution to the problem of mashing potatoes had to draw on a knowledge of the history of potato-mashing, or detailed botanical understanding of tubers and their place in the food chain, or the theoretical underpinnings of the construction of kitchen tools, etc.

Replies from: Vaniver, Raemon
comment by Vaniver · 2018-08-30T22:42:41.005Z · LW(p) · GW(p)
I answered this question at some length in this comment [LW(p) · GW(p)].

I had read that comment before I wrote the grandparent, and it still wasn't clear to me.

That is, it seems to me like your second category of explanation (the "practical, specific, and circumscribed" type) matches on to the ritual behavior explanation, but your first category of explanation (the "Big History" type) comes packaged with some standard of completeness (or something else?) that is less clear to me. And so the question remains whether the dimension that Benquo is pointing at with "zetetic" is meaningful, and whether the objection (that I understand as "an explanation that attempts to be zetetic but is too small is 'insight porn' and doesn't actually achieve the benefits of Big History") is central.

[That is, it may well be that while walls are useful, individual bricks are not, and so a strategy of accumulating bricks is useless without also having some focus on architecture, and so it is not particularly useful to highlight the skill of brick-accumulation. I hope it is also clear how the original comment in this chain is not a good pointer towards this presentation of a way in which Benquo's post could be deeply mistaken.]

But this isn’t right, is it?

I apologize if this response seems overly basic, because I'm genuinely uncertain where the miscommunication is happening here, and so am attempting to cover lots of possibilities.

One hypothesis that seems too simplistic to be right is something like "Said is generally skeptical of counterfactual reasoning." That is, suppose I perform some behavior (like mashing potatoes with a glass), and we then discuss what would have happened if instead I performed a different behavior (like mashing potatoes with a whisk). Perhaps we have to move from language like "true" and "false" to language like "consistent" and "inconsistent," but it seems to me that there's value in considering statements like "If I had tried to mash potatoes with the whisk in my kitchen, it wouldn't have worked any better than the fork" and value in statements like "that would have been so because the whisk's tines are thin and easily deformed, enough so that they would be overpowered by the potatoes."

Now, those are statements about models; we have to modify them to get predictions about reality. For the first one, I have to cash it out in terms of subjective experience during a future test; for the second one, the connection is even less direct, because it's not just about a future experimental result but what changes to the experimental setup would produce different results. In addition, because they're statements about models, they have a truth-value of sorts that's different from the experimental results (the statement 'Vaniver believes X' can well be true even if X is itself false, and the statement 'X is consistent with Y' can again be true even if X is itself false).

The thing that the time-travel story is doing is not delivering experimental results, because we can't actually send a scientist and a ritualist back in time and determine what consequences would result. The thing that time-travel stories are doing is proposing experiments that are impossible in reality but accessible with models.

That is, suppose I say "Comparing two people transported from our modern culture to culture A, I think a scientist would be better at surviving than someone who has less understanding of how modern culture is put together or the individual work of understanding and creating culture." It seems to me like you have many responses, ranging from "I agree, because X" to "I disagree, because Y" to "I don't think this question is resolvable" to "I don't think this question is interesting." It seems to me like there's interesting material in the first two responses, even if the third response is in fact valid.

A potentially absurd example: it seems to me like there's a consistent view in which the mathematical technique of proof by contradiction is classed with "generalizing from fictional evidence." Suppose I am trying to convince Alice that the square root of 2 is an irrational number; I start by saying "suppose it is rational," step through the argument, and then derive a contradiction. "Therefore," I conclude, "it is irrational." Alice replies with "wait, but this conclusion depends on an argument whose premise is false; it seems exceedingly dangerous to allow [arguments whose premises are false] as valid operations in your logic." How do I convince Alice that proof by contradiction is valid in a way that generalization from fictional evidence is not?
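
(For reference, a sketch of the standard derivation at issue, using nothing beyond elementary number theory:)

```latex
% Classic proof by contradiction that $\sqrt{2}$ is irrational.
Suppose $\sqrt{2} = p/q$ with $p, q \in \mathbb{Z}$, $q \neq 0$, and
$\gcd(p, q) = 1$. Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence
$p$ is even; write $p = 2k$. Then $4k^2 = 2q^2$, so $q^2 = 2k^2$ is
even, hence $q$ is even. Now $2$ divides both $p$ and $q$,
contradicting $\gcd(p, q) = 1$. Hence $\sqrt{2}$ is irrational.
```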

[Once I have such an argument, which perhaps rests on a distinction between various kinds of fictional evidence or a crisper definition of 'fictional,' can I generalize that argument to this scenario, not necessarily to rescue time travel stories specifically, but to rescue something adjacent to time travel stories?]

But what does this have to do with the OP? It does not seem to me like your cleverly practical solution to the problem of mashing potatoes had to draw on a knowledge of the history of potato-mashing, or detailed botanical understanding of tubers and their place in the food chain, or the theoretical underpinnings of the construction of kitchen tools, etc.

It has to do with the distinction between generators of ritual and rituals, or the invention of tools and the use of tools. The central claims of the OP (as I understand it) are:

1. There is a distinction between using tools and inventing them.

2. This distinction is reflected in explanations, as explanations vary in how much they improve the use of tools and how much they improve the invention of tools.

3. The structure and content of an explanation is linked to how that explanation varies on those dimensions, and that there is a style of explanation that focuses on whys and connections that results in more improvement along the dimension of invention of tools.

4. Many explanations are solely judged on how well they improve tool use in a narrow dimension (because, perhaps, this is the only thing that can be verifiably tested) and one should expect this to lead to explanations that are deficient at improving tool invention.

5. It is desirable to have the ability to invent tools as well as just use them.

[I expect this presentation to be slightly unsatisfying to Benquo, because explanations aren't just about using them; and so 'tools' are a bit too narrow, but are perhaps easier to see than the real thing.]

That example only engages with some of the points; it's a demonstration of 1 and 5 more than it is 2, 3, or 4. (It's implicitly an example of 4 in that this is clearly a one-off test; if we want to measure my ability to invent tools, we can't really ask me to mash potatoes three times in a row, or verify that techniques I generate are original instead of copied.) It also seems important that, in worldview of the OP, my action is not simply "clever" but has a more detailed description of what mental operations led to the new behavior, such that it could perhaps be transferred.

I agree that there's some real controversy or discernment involved with 3; a detailed botanical understanding of tubers seems unlikely to help, unless it did something squishy like cement a self-narrative as a tool inventor such that my mind even bothered to spend calories looking for a better way to mash potatoes, or it's the case that a policy of seeking out a connected understanding of the world led to both knowing the botanical understanding of tubers and the ability to improvise a potato masher. And it seems easy to look at botanical understandings of tubers and see some of them as more or less connected to the navigation of lives (and thus likely more or less useful for the invention of tools related to tubers).

Replies from: Benquo, SaidAchmiz, SaidAchmiz, SaidAchmiz
comment by Benquo · 2018-08-31T21:53:27.605Z · LW(p) · GW(p)

I basically agree with your summary of my central claims, and think your treatment of the subject deserves at least a separate comment and ideally a separate post. One thing that's more obvious to me reading your comment is the extent to which my post is a praise of episteme done right over metis (including metis about pretending episteme).

Thanks for the interpretive labor you're doing, by the way - I'm constrained by the fact that I'll naturally feel defensive when someone's somewhat rudely telling me that I'm talking nonsense, so it's helpful for you to step in as a third party & try to bridge the gap here.

comment by Said Achmiz (SaidAchmiz) · 2018-08-30T23:19:23.840Z · LW(p) · GW(p)

(The parent is a long comment and makes several points, so I’m going to answer it in several parts. This is part 1.)

That is, it seems to me like your second category of explanation (the “practical, specific, and circumscribed” type) matches on to the ritual behavior explanation

I object in the strongest terms to characterizing this sort of explanation and this sort of knowledge as “ritual behavior”. In fact, not only does it constitute real understanding of the problem at hand (and the problem domain in general)—the kind of understanding that lets you accomplish real-world goals, and improvise, and predict the outcomes of processes and of actions, etc.—but it almost always constitutes a greater and a deeper understanding than the sort of explanation which tries to be more broad, more “from first principles”, more interdisciplinary, etc.

but your first category of explanation (the “Big History” type) comes packaged with some standard of completeness (or something else?) that is less clear to me

What I was saying, there, was that to achieve anything resembling real understanding in this other sort of way, you have to have both depth and breadth; you have to reach across domains and across contexts, and you have to understand each thing you encompass in some detail. “Completeness” isn’t quite right… but perhaps it’s close.

And so the question remains if the dimension that Benquo is pointing at with “zetetic” is meaningful, and whether the objection (that I understand as “an explanation that attempts to be zetetic but is too small is ‘insight porn’ and doesn’t actually achieve the benefits of Big History”) is central.

The problem is not with explanations which attempt to be of the first sort I describe, but are too small (although that, too, is a problem—just not the problem). The problem is with explanations which attempt to be of the first type, but also, at the same time, attempt to be of the second type. That doesn’t work. That is what gets you insight porn.

Insofar as what Benquo is pointing at is some purported dimension of variation such that moving in one direction along that dimension gets you explanations that are both more like the ones found in The Adapted Mind, and also more like the ones found in the Dessert Bible, that dimension is not meaningful, and viewing explanations or knowledge through this lens is actively harmful.

Replies from: Vaniver
comment by Vaniver · 2018-08-31T21:30:10.022Z · LW(p) · GW(p)
I object in the strongest terms to characterizing this sort of explanation and this sort of knowledge as “ritual behavior”.

I'm not using 'ritual' as a term of abuse, here; someone pressing CTRL-C to copy some text is engaging in 'ritual behavior.'

it almost always constitutes a greater and a deeper understanding than the sort of explanation which tries to be more broad, more “from first principles”, more interdisciplinary, etc.

It's now clear that you're talking about quite different dimensions of variation.

[This is just responding to points that are easy to respond to contained in the parent; my overall sense is "it might take a post to point at what's going on here, and so I'm going to try to write that post instead of handle it here."]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-31T21:50:11.137Z · LW(p) · GW(p)

I’m not using ‘ritual’ as a term of abuse, here; someone pressing CTRL-C to copy some text is engaging in ‘ritual behavior.’

Then, I confess, I haven’t the first idea just what you mean by “ritual behavior”. Either your usage of the term is so broad as to be meaningless, or… I don’t know what. In either case, you’re diverging from common usage, and I can’t really respond to your points.

I certainly hope that you manage to write that post! When you do, I’d ask that you take some time to explain what you have in mind when you speak of “ritual behavior” (and it might be prudent to consider alternate terminology, to avoid a namespace collision).

Replies from: Vaniver
comment by Vaniver · 2018-08-31T23:14:50.735Z · LW(p) · GW(p)
In either case, you’re diverging from common usage, and I can’t really respond to your points.

It's behavior by rite instead of by model; stated another way, "behavior motivated by past experience," but that doesn't quite cleave things at the joints. In particular, it's not exclusive with "behavior motivated by models"--perhaps a better reference is something like "autopilot," but the dominant feature of autopilot is lack of attention, which is only weakly related.

An example that comes to mind is when I tried to switch keyboard layouts, I discovered that I had two modes of typing--the unconscious knowledge of where all the qwerty keys were, that I could access effortlessly without even having a physical keyboard, and the conscious knowledge of where all the qgmlwy keys were, which I could access only through deliberate thought and careful muscular control. Even though I 'knew' which keyboard layout I was using, and 'knew' where every key was now, I also 'knew' that if I wanted to type an 'a' I used the left pinky instead of the right index finger. (After a few weeks of typing at 7 wpm, I gave up and stuck with qwerty.)

The relevance here is that what Benquo calls 'functional' explanation ("how they ought to interface with it right now") is basically targeted at creating the right behavior without any judgment or interest in the resulting mental changes. It doesn't matter to me how the person who wants to copy and paste text thinks about it; it just matters to me that they press the right keys to accomplish the goal.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-31T23:29:39.464Z · LW(p) · GW(p)

I’m sorry to say that this explanation makes very little sense to me. I don’t know if there’s inferential distance here, or true disagreements about the world, or what. I think that this is another point which might benefit from a post-length discussion!

Replies from: Ikaxas
comment by Vaughn Papenhausen (Ikaxas) · 2018-09-05T20:35:28.983Z · LW(p) · GW(p)

There's a lot going on in this thread and I'm not sure exactly where this response best belongs, so I'll just put it here.

In this comment [LW(p) · GW(p)] Vaniver wrote:

some explanations are trying to talk about underlying generators while other explanations are trying to talk about ritual behavior

I think I have some idea of what he was trying to say here, so let me try to interpret a bit (Vaniver, feel free to correct if anything I say here is mistaken).

There are two kinds of explanation (there are obviously more than two, but among them are these):

The first kind is the kind where you're trying to tell someone how to do something. This is the kind of explanation you see on WikiHow and similar explanation sites, in how-to videos on YouTube, etc. In the current case, this would be something like the following

How to make a sourdough starter:
Step 1: Add some flour to some water.
Step 2: Leave out for a few days, adding more water and flour as necessary.
Step 3: And there you have a sourdough starter.

This is the kind of explanation Vaniver was referring to as "merely trying to present people with additional rituals to perform." I think a better way to describe it is that you're providing someone with a procedure for how to do something. [Vaniver, I'm somewhat puzzled as to why you used the word "ritual" rather than "procedure," when "procedure" seems like the word that fits best? Is there some subtle way in which it differs from what you were trying to say?] I'll call it a "procedural explanation."

The second kind may[1] also include telling someone a procedure for how to do something (note that Benquo's explanation did, in fact, provide a simple procedure for making a sourdough starter). But the heart of this type of explanation is that it also includes the information they would have needed in order to discover that procedure for themselves. This is what I take Benquo to be referring to when he says "zetetic explanation." When Vaniver uses the word "generators" in the quote above (though not necessarily in other contexts--some of his usages of the word confuse me as well) I think it means something like "the background knowledge or patterns of thought that would cause someone to think the thought in question on their own." A few examples:

  1. The generators of the procedure for the sourdough starter were something like:[2]
  • On its own, grain is hard to digest
  • There are microbes on it that can make it easier to digest
  • If you create an environment they like living in, you can attract them and then get them to do things to your dough that make it easier to digest
  • They like environments with flour and water
  This is the kind of information that would lead you to be able to generate the above procedure for making a sourdough starter on your own.
  2. In this comment [LW(p) · GW(p)] I make the point that I, and perhaps some of the mods, believe that communication is hard and that this leads me (us?) to think that people should probably put in more effort to understand others and to be understood than might feel natural. I could just as easily say that the generator of the thought that [people should probably put in more effort to understand others and to be understood than might feel natural] is that [communication is hard], where "communication is hard" stands in for a bunch of background models, past experiences, etc.
  3. Vaniver's example with mashing potatoes. The "ritual" or "procedure" that his friends had was "get the potato masher, use it to mash the potatoes." But Vaniver had some more general knowledge that enabled him to generate a new procedure when that procedure failed because its preconditions weren't in place (i.e. there was no potato masher on hand). That general knowledge (the "generators" of the thought "use a glass," which would have allowed his friends to generate the same thought had they considered them) was probably something like:
  • Potatoes are pretty tough, so you need a mashing device that is sufficiently hefty
  • A glass is sufficiently hefty
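To make the procedure/generator contrast concrete, here's a toy sketch in code. Everything in it (the names, the "hefty enough" test) is invented for illustration, and the test merely encodes the bullets above rather than any claim about what actually makes a good masher:

```python
# Toy contrast between a fixed procedure ("ritual") and behavior regenerated
# from "generators." All names and the adequacy test are invented; the test
# just encodes the bullets above.

def fixed_procedure(kitchen):
    # The procedure assumes its precondition and has no fallback.
    if "potato masher" not in kitchen:
        raise RuntimeError("procedure's precondition failed: no masher")
    return "mash with potato masher"

def generated_procedure(kitchen):
    # The generators: potatoes need a sufficiently hefty mashing device,
    # so search whatever is on hand for anything that qualifies.
    hefty_enough = {"potato masher", "glass", "sturdy mug"}
    for tool in kitchen:
        if tool in hefty_enough:
            return "mash with " + tool
    return "no adequate tool found"

print(generated_procedure(["fork", "glass", "whisk"]))  # -> mash with glass
```

The point is purely structural: the second function can recover from a missing masher because it carries the criteria that generated the first function's procedure.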

But what does [the potato-mashing story] have to do with the OP? It does not seem to me like your cleverly practical solution to the problem of mashing potatoes had to draw on a knowledge of the history of potato-mashing, or detailed botanical understanding of tubers and their place in the food chain, or the theoretical underpinnings of the construction of kitchen tools, etc.

The history is not necessarily the important part of the "zetetic explanation." Vaniver's solution didn't have to draw on the "detailed theoretical underpinnings of the construction of kitchen tools," but it did have to draw on something like a recognition of "the principles that make a potato masher a good tool for mashing potatoes."

I think the important feature of the "zetetic explanation" is that it **gives the generators as well as just the object-level explanation**. It connects up the listener's web of knowledge a bit--in addition to imparting new knowledge, it draws connections between bits of knowledge the listener already had, particularly between general, theoretical knowledge and particular, applied/practical/procedural knowledge. Note that Benquo gives Feynman's explanation of triboluminescence as another example. This leads me to believe the key feature of zetetic explanations isn't that they explain a procedure for how to do something plus how to generate that procedure, but that they more generally connect abstract knowledge with concrete knowledge, and that they connect up the knowledge they're trying to impart with knowledge the listener already has (I've been using the word "listener" rather than "reader" because, as Benquo points out, this kind of explanation is easier to give in person, where it can be personalized to the audience). The listener probably already knows about sugar, so when Feynman explains triboluminescence he doesn't just explain it in an abstract way, he tells you that it applies to sugar so that you can link it up with something you already know about.

On one way of using these words, you might say that a zetetic explanation doesn't just create knowledge, it creates understanding.

As I say [LW(p) · GW(p)], communication is hard, so it's possible that I've misinterpreted Benquo or Vaniver here, but this is what I took them to be saying. Hope that helped some.


[1] note that, as I mention near the end of the comment, there might be zetetic explanations of things other than procedural explanations. Not sure if Benquo intended this, but I think he did, and I think in any case that it is a correct extension of the concept. (I might be wrong though--Benquo might have intended zetetic explanations to be explanations answering the question "where did X come from?" But if that's the case then much of my interpretation near the end of the comment is probably wrong)

[2] I actually think you're right [LW(p) · GW(p)] that Benquo's explanation doesn't fully give the generators here (though as Vaniver says [LW(p) · GW(p)], "half of it is, in some sense, 'left out'"), so I don't claim that the generators I list here are fully correct, just that it would be something like this.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-09-05T21:19:10.727Z · LW(p) · GW(p)

There’s a lot to take in here, and I may post further replies after I’ve had a chance to re-read your comment a couple of times and give it some thought. However, for now, I do have one quick observation to make:

Vaniver’s example with mashing potatoes. The “ritual” or “procedure” that his friends had was “get the potato masher, use it to mash the potatoes.” But Vaniver had some more general knowledge that enabled him to generate a new procedure when that procedure failed because its preconditions weren’t in place (i.e. there was no potato masher on hand). That general knowledge (the “generators” of the thought “use a glass,” which would have allowed his friends to generate the same thought had they considered them) was probably something like:

  • Potatoes are pretty tough, so you need a mashing device that is sufficiently hefty

  • A glass is sufficiently hefty

This is not an accurate account of Vaniver’s example! Let’s analyze the error:

“Potatoes are pretty tough” is, of course, wrong! Before mashing potatoes, you coarsely dice and boil them; at this point, they are not tough at all, but are quite soft—soft enough to come apart in your hand, too soft to even handle without the pieces breaking apart!

Thus, “you need a mashing device that is sufficiently hefty” is also not true. Heft, in fact, has nothing whatever to do with the reason why a fork was a poor tool, and a glass was a better tool.

What does, then? It’s a matter of shape: a fork is problematic because it has much less surface area to transmit the force of your hand to the potatoes; and because the fork’s mashing surface (such as it is) is not perpendicular to its primary axis (i.e., its handle), it is very awkward to bring to bear on the potatoes in the pot or saucepan.

A glass, on the other hand, has a nice big surface area—the bottom—and, because it’s simply a cylinder, that surface area can easily be brought to bear on the potatoes, without the pot/saucepan interfering.[1]
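To put rough numbers on the difference (all dimensions assumed purely for illustration): a glass with a 6 cm base, versus perhaps a few square centimeters of usable fork surface:

```latex
A_{\text{glass}} = \pi r^2 \approx \pi\,(3\ \text{cm})^2 \approx 28\ \text{cm}^2,
\qquad
A_{\text{fork}} \approx 2\text{--}4\ \text{cm}^2 .
```

And the glass’s surface is already perpendicular to the direction of the push, so nearly all of that area can be brought to bear at once.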

Do you see? You constructed an explanation which was totally wrong; and not just wrong, but wrong in a way that (a) would become apparent if you actually went out and did the activity being described, and (b) the wrongness of which is not even difficult to see by thinking about—ask yourself, “what if Vaniver’s friends had used a fork with a core of solid lead, but otherwise the same shape as a regular fork? and what if instead of a glass made of… glass, they used a glass made of a lightweight plastic? would the rank ordering of these tools’ applicability to potato-mashing thereby reverse?”

I don’t mean to come down hard on you for this; it’s an error which I don’t think there’s any good reason to expect you not to have made. But that’s my whole point. It’s very easy to deceive oneself that one has a good “generator” or a good “zetetic explanation” or a good what have you, when in reality what one has is just wrong. It’s not a big problem when the explanation is about mashing potatoes; if your explanation encounters reality and is instantly shattered, well, big deal, right? You live and you learn… but when the matter is more serious than that, relying on such “knowledge” is tremendously dangerous.

[1] Indeed, there exist potato mashers which are simply cylinders—not of glass, but of wood—with coaxial handles attached.

comment by Said Achmiz (SaidAchmiz) · 2018-08-31T05:49:41.672Z · LW(p) · GW(p)

(The parent is a long comment and makes several points, so I’m going to answer it in several parts. This is part 3.)

The central claims of the OP (as I understand it) are:

So, first of all, it’s not clear to me that this is a good summary of the OP (in the sense that—it seems to me—it adds your interpretation to it, rather than representing the post directly). That being said, I’m not Benquo, so perhaps this is indeed what he meant. But regardless of any of that, let me go ahead and respond to these claims point by point:

  1. There is a distinction between using tools and inventing them.

This is rather vague. A distinction? What is the nature of this distinction, exactly? I mean, this claim seems trivially true, in some sense, but I’m not sure how important it’s supposed to be, or how fundamental, etc. Elaborations (with details and examples) would help.

  2. This distinction is reflected in explanations, as explanations vary in how much they improve the use of tools and how much they improve the invention of tools.

So, first of all, it seems to me like either there’s an implication that all human activity can be placed into one of these two categories (“using tools” vs. “inventing tools”). Or perhaps it’s only all human activity of some specific type? If so, what type would that be?

Or, otherwise, the question arises: do some explanations do things other than “improve the use of tools” or “improve the invention of tools”—either instead of, or in addition to? What other things might those be?

Also, is there any correlation between how much any given explanation improves the use of tools vs. how much it improves the invention of tools? Or is this a linear spectrum? Or are these totally orthogonal dimensions? And if there is a correlation, what is its causal origin? (And are these categories even sensible?)

And another question: how does the domain-specificity of an explanation interact with the degree to which “does this improve the use of tools” and “does this improve the invention of tools” even make sense as questions to ask?

  3. The structure and content of an explanation is linked to how that explanation varies on those dimensions, and there is a style of explanation that focuses on whys and connections that results in more improvement along the dimension of invention of tools.

This seems much too vague a claim for me to say anything more about it than what I’ve said above, re: #2.

  4. Many explanations are solely judged on how well they improve tool use in a narrow dimension (because, perhaps, this is the only thing that can be verifiably tested) and one should expect this to lead to explanations that are deficient at improving tool invention.

There are actually several claims in here, which must be untangled before they can be addressed. My questions re: #2 seem like reasonable first steps toward untangling this question also.

  5. It is desirable to have the ability to invent tools as well as just use them.

This is difficult to evaluate. Negate it, and we have:

“It is undesirable to have the ability to invent tools; we should only be able to use tools, not invent them.”

I have trouble imagining who would ever endorse this claim, so #5 is either an applause light… or, it’s an oblique way of suggesting that we should move further in some direction, on some purported spectrum.

Taken that way, we might interpret it as saying that we (for some value of “we”) currently have insufficient ability to invent tools, and should have more. In which case, it seems necessary to make, and defend, an explicit, positive claim about where on this purported spectrum we currently are, as well as a normative claim about where we ought to be. (The prerequisite for all of this, of course, would be establishing the structure of the purported spectrum in the first place, as I comment on above.)

That example [with the potatoes —SA] only engages with some of the points; it’s a demonstration of 1 and 5 more than it is 2, 3, or 4.

It is not a coincidence that #1 and #5 are, as I say above, the least interesting and most trivial of the claims (at least when taken at face value).

As an aside—though this is not (I think) terribly relevant to the OP—it does not seem to me like the “using tools vs. inventing tools” dichotomy (of which I am still rather skeptical) is all that natural a fit for characterizing your example with the potato mashing. (After all, you didn’t actually invent a new tool!) One could also describe it in some sort of “outside-the-box thinking” terms, or perhaps in terms of some sort of “analytical skills”[1], or perhaps in terms of “seeing the artificiality of purposes”[2], etc. We could have endless fun with the game of inventing plausible paradigms with which to describe this bit of clever thinking on your part… but I do not think the exercise would gain us any useful understanding.

[1] In a fairly literal sense of the word: we might say, perhaps, that you analyzed the act of using a potato masher to mash potatoes—that is, you decomposed the act into its constituent parts, such as “exerting pressure on the potatoes to crush them and deform their structure, breaking them up” and “using an instrument shaped so as to allow downward pressure to be exerted over a wide area”, etc.; and that this analysis, this skill of breaking-down, is what allowed you to synthesize your clever solution.

[2] Namely, of course, the fact that “the glass is for drinking out of” is not a property of the glass, and that the glass simply is the specific physical object that it is; it has no little XML tag attached, where its purpose is stored. We might, perhaps, claim that a keen sense of the artificiality of purpose is what allows one to perceive the possibility of unorthodox uses for designed artifacts.

Replies from: Vaniver
comment by Vaniver · 2018-08-31T22:16:45.859Z · LW(p) · GW(p)
So, first of all, it seems to me like either there’s an implication that all human activity can be placed into one of these two categories (“using tools” vs. “inventing tools”). Or perhaps it’s only all human activity of some specific type? If so, what type would that be?

I meant the latter, and the answer is going to be unsatisfying: the type is "using or inventing tools." Specifically, note that "inventing" is a subcategory of "using," worth separating out because it uses cognitively distinct labor (like the sort you refer to as 'analyzing'). Then the questions are "okay, what fuels that cognitively distinct labor? What explanations make people better at analyzing?".

This also seems related to all of the other questions you raise in this subsection, where it seems like you're trying to expand the claim (consider the difference between "A and B are different" and "are you implying that everything is A or B?", and consider the difference between "there are two dimensions" and "what are the statistical properties of the real world along those dimensions?"). I am sort of torn on this (as conversational technique), because it seems useful at exploring the thing, but also changes the direction of the conversation in a way that increases feelings of friction, or something.

In particular, I have only vague opinions on the statistical properties of explanations in the wild, and so the question puts me in something of a bind where either I share my vague opinions (which, if they get expanded, means I am even further aground, and if they get contradicted, the general point may be lost in the controversy) or somehow dismiss the question (which has its own share of drawbacks), and this bind is an example of the sort of friction this sort of thing can generate.

In which case, it seems necessary to make, and defend, an explicit, positive claim about where on this purported spectrum we currently are, as well as a normative claim about where we ought to be.

In the OP, the examples of this are Benquo's mother and Benquo, in the context of yeast. In general, I don't think I agree that such claims need to be explicit, and it's not obvious to me that you have the right standards for 'defend.'

I expect to be able to broadcast advice on how to lose weight and trust that readers have their own sense of their weight and their own sense of their desired weight and their own sense of their other tradeoffs, such that they judge my advice accordingly, rather than requiring that any advice on how to lose weight come packaged with disclaimers about anorexia and a discourse on what sort of measurements are actually connected to anything good. Those things should exist somewhere, of course--the culture should have 'anorexia' as a concept and discuss it sometimes, and should have concepts for things like bodyfat percentage--but requiring not just that they exist everywhere but also that anyone who touches on any part of the topic be an expert on the whole topic dramatically limits what can be said.

Connecting back: the discussion of leavening here [LW(p) · GW(p)] reads to me as something like "don't pretend that you know what leavening is in my presence, you ignoramus!", which is a move that is occasionally sensible to make (if, say, Benquo claimed to know more about yeast than Said does, a battle of facts to settle the matter seems appropriate) but seems out of place here. [The reasons why it seems out of place here touch on controversies that are long to get into. I am not making a generic "this chills speech and that's bad" argument as some speech should be chilled; I am instead making a claim of the form "this way to divide chilled and unchilled speech doesn't line up with the broader goals of advancing the art of human rationality."]

[This is just responding to points that are easy to respond to contained in the parent; my overall sense is "it might take a post to point at what's going on here, and so I'm going to try to write that post instead of handle it here."]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-31T23:26:12.932Z · LW(p) · GW(p)

I don’t really follow most of what you say here about types of explanations, conversation directions, and feelings of friction, so I won’t respond to that part. Perhaps the post you mean to write will clarify things.

Concerning this bit specifically:

… requiring not just that they exist everywhere but also that anyone who touches on any part of the topic be an expert on the whole topic dramatically limits what can be said.

I think it’s often good to dramatically limit what can be said. I think that there are many cases where most of what can be said—in the absence of such limits—is nonsense; and we should try our utmost to ensure that nonsense cannot be said, and only not-nonsense can be said.

Connecting back: the function of the discussion of leavening here [LW(p) · GW(p)] reads to me as something like “don’t pretend that you know what leavening is in my presence, you ignoramus!”, which is a move that is occasionally sensible to make (if, say, Benquo claimed to know more about yeast than Said does, a battle of facts to settle the matter seems appropriate) but seems out of place here.

If, indeed, it is the case that Benquo doesn’t know what leavening is, then this is an excellent reason to distrust what he says in the OP. In that hypothetical case (as, absent a reply from Benquo, we still do not know whether it is the case in reality), the fact that it was in my presence that he pretended to that specific knowledge which he does not possess is fortuitous, as it allowed that lack of knowledge to be pointed out, and thus gave all readers the opportunity to reduce their credence in the OP’s claims. (One of the reasons why I so vehemently oppose insight-porn type posts is due to the Dan Brown phenomenon, especially combined with the “Gell-Mann amnesia” effect: something can sound very much like real knowledge/expertise, but without an actual expert on hand to verify it, how do we know it isn’t just a pile of nonsense? We don’t, of course. See also Scott’s old post about epistemic learned helplessness, which talks about how explanations can be very convincing while being total nonsense.)

[The reasons why it seems out of place here touch on controversies that are long to get into. I am not making a generic “this chills speech and that’s bad” argument as some speech should be chilled; I am instead making a claim of the form “this way to divide chilled and unchilled speech doesn’t line up with the broader goals of advancing the art of human rationality.”]

Well, as you say: some speech should be chilled. But I look forward to your more detailed commentary on this matter.

Replies from: habryka4
comment by habryka (habryka4) · 2018-09-01T00:19:00.636Z · LW(p) · GW(p)

I appreciate the writing and clarity, but also (as we've gone over in a few past discussions) disagree on the object level. I ended up downvoting this comment because I think the position it defends is potentially quite damaging on a norms level, but wanted to make it clear that I do not disagree with the phrasing, method of communication, or fact that this comment was written.

(Of course, this highlights problems with karma serving multiple purposes that are sometimes at odds, which I am aware of and would still like to fix, but for now we have what we have)

comment by Said Achmiz (SaidAchmiz) · 2018-08-30T23:46:36.992Z · LW(p) · GW(p)

(The parent is a long comment and makes several points, so I’m going to answer it in several parts. This is part 2.)

I hope it is also clear how the original comment in this chain is not a good pointer towards this presentation of a way in which Benquo’s post could be deeply mistaken.

The point implied by the original comment in this chain is absolutely critical. It disappoints me to see it (mostly) dismissed, because I think it is emblematic of a deep and pervasive problem with current trends in “rationalist” thought. (I elaborate somewhat further in this comment.)

One hypothesis that seems too simplistic to be right is something like “Said is generally skeptical of counterfactual reasoning.”

In a sense, this is, indeed, accurate. I do not mean to recapitulate, here, in this comment thread, the entirety of the philosophical debates about counterfactuals, but my view (which seems to me to be relatively uncontroversial) is this:

… suppose I perform some behavior (like mashing potatoes with a glass), and we then discuss what would have happened if I had instead performed a different behavior (like mashing potatoes with a whisk).

It seems to me that we can have a meaningful discussion about this counterfactual scenario to the extent that we can transform it into questions like:

  1. “What has happened in the past, in situations where I have mashed potatoes with a whisk?”
  2. “What has happened in the past, in situations similar to the above—for example, cases where I have mashed turnips with a whisk, or mashed potatoes with a fork?”
  3. “What do I predict will happen in the future, if I do any of the above things?”

#1 and #2 concern mundane facts, which are known to us. (We can also modify them along the lines of “not I, but my friend Ann, mashed potatoes with a whisk”, or “I heard from my mother about someone who mashed potatoes with a whisk”, etc.) #3 concerns predictions, which become informative once made and then tested.

In the “time-travel story case”, #1 doesn’t apply for obvious reasons; #2 doesn’t apply unless you relax the standard of similarity so far that it becomes useless; #3 also doesn’t apply because—at least for the foreseeable future—we can’t do this experiment. So any discussion about “what would happen” in this imaginary scenario is nonsense; nothing we can say about “what would happen” can be true, or false, in any meaningful way.
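To make this concrete, here is a toy sketch of the view, purely for illustration (the case records and the similarity measure are invented):

```python
# Toy model of "counterfactuals as past cases plus testable predictions."
# The case records and the similarity measure are invented for illustration.

past_cases = [
    {"food": "potatoes", "tool": "fork",  "outcome": "slow, uneven mash"},
    {"food": "turnips",  "tool": "whisk", "outcome": "poor mash, bent whisk"},
    {"food": "potatoes", "tool": "glass", "outcome": "quick, even mash"},
]

def similarity(case, query):
    # Crude measure: count matching fields. A real measure would be richer.
    return sum(case[key] == query[key] for key in query)

def evaluate_counterfactual(query):
    # Questions 1 and 2: what happened in identical or similar situations?
    precedents = [c for c in past_cases if similarity(c, query) > 0]
    precedents.sort(key=lambda c: similarity(c, query), reverse=True)
    # Question 3: a prediction we could go out and test.
    prediction = precedents[0]["outcome"] if precedents else None
    return precedents, prediction

# "What if I had mashed potatoes with a whisk?" has nearby precedents:
print(evaluate_counterfactual({"food": "potatoes", "tool": "whisk"}))
# A time-travel scenario has no precedents, so no prediction is returned:
print(evaluate_counterfactual({"food": "mammoth", "tool": "time machine"}))
```

When there are no precedents at all, as in the time-travel case, the function has nothing to return; on this view, that is precisely the situation in which talk of “what would happen” stops being meaningful.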

It seems to me like you have many responses, ranging from “I agree, because X” to “I disagree, because Y” to “I don’t think this question is resolvable” to “I don’t think this question is interesting.” It seems to me like there’s interesting material in the first two responses, even if the third response is in fact valid.

The question is unresolvable and uninteresting; I don’t see what there is to agree or disagree about.

A potentially absurd example: it seems to me like there’s a consistent view in which the mathematical technique of proof by contradiction is classed with “generalizing from fictional evidence.”

I agree that this is absurd. What you’re talking about here is formal systems; that is a very different case. (I originally typed out an extensive rebuttal of your example, but to be honest, it seems to me like it’s simply a non sequitur, which makes the rebuttal moot.)

Replies from: Vaniver, Bastian Sommerfeld
comment by Vaniver · 2018-08-31T22:47:05.873Z · LW(p) · GW(p)
The point implied by the original comment in this chain is absolutely critical.

There are two tracks here:

1) If the point is critical, implying it is perhaps not sufficient, and the thing should be spelled out. Short comments often imply many distinct generators.

In particular, the generator that I think is most obvious is "there's a genre mismatch between 'the ordinary means by which people navigate their lives' and 'time-travel stories', as the first is nonfictional and the second is fictional; don't generalize from fictional evidence!", but the position that I saw as a more serious objection from Benquo's point of view was "while there are powerful models here, there's also insight porn that feels powerful but isn't; it is not clear that the dimension you highlight separates the two as opposed to leading you towards insight porn."

2) Whether or not the point is correct. As it happens, I think the second concern (that this doesn't reliably distinguish insight porn from true insight) is interesting and important, and that the first concern (that time travel stories are completely distinct from ordinary lives) is mistaken.

A brief comment on why it's mistaken: Robinson Crusoe is a fictional example, yes, but it's a fictional member of a real class, and in explanatory pieces you should expect the author to use examples the audience will know, and those will generally be fictional examples, both because of higher audience recognition and because fictional examples can more crisply separate out the real thing. The ordinary means by which people navigate their lives includes losing some foundations of support, venturing into the unknown, and making tools out of their constituent parts; there is a meaningful way in which any programmer who opens up a new text window is doing something cognitively similar to Robinson Crusoe.

[A frame people can adopt, which is sometimes useful, is that they're an amnesiac time traveler from 3018; what thing can they do now, even though they don't remember what it is? See Archimedes Chronophone [LW · GW]: the point here is a subtle one. It's not "remember something from the future", because you've forgotten it; it's "what happens if I take seriously the possibility that there are major opportunities that are accessible to someone now just because they know something, and what modes of thought might lead to discovering that thing?".]

[This is just responding to points that are easy to respond to contained in the parent; my overall sense is "it might take a post to point at what's going on here, and so I'm going to try to write that post instead of handle it here."]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-31T23:07:24.830Z · LW(p) · GW(p)

This is as good a time as any for me to mention that this term “generators”, which you’ve used a few times now, is not familiar to me in this context. I think I can sort of guess the general meaning from context, but I’m really not sure. Where is it from? Is it idiosyncratic to you, or…?

Anyhow, as to your #1, both objections you describe are important. Yes, don’t generalize from fictional evidence, and also avoid insight porn.

Robinson Crusoe is a fictional example, yes, but it’s a fictional member of a real class …

What class? People who’ve gotten shipwrecked? Or people who’ve gotten shipwrecked and managed to sustain themselves via their resourcefulness, etc.? Or something else?

If an author chooses to use a fictional example, then the specific real class of which the fictional example is a member should be identified explicitly, and as many examples as possible of real members of that class should be provided.

… and in explanatory pieces you should expect the author to use examples the audience will know, and those will generally be fictional examples, both because of higher audience recognition and because fictional examples can more crisply separate out the real thing.

I do not at all agree that this is a reasonable expectation. In fact, I think that reliance on fictional examples is a deep and pervasive problem in “rationalist” writing (and thought), and one which has done much to corrupt the epistemics of the rationalist community/movement. I can hardly think of terms too strong in which to object to this practice. I think it would be a very good idea to excise it, root and stem. (Perhaps, one day, we may trust ourselves with the use of fictional examples once more; but not now, and not for some time.)

The ordinary means by which people navigate their lives includes losing some foundations of support, venturing into the unknown, and making tools out of their constituent parts; there is a meaningful way in which any programmer who opens up a new text window is doing something cognitively similar to Robinson Crusoe.

I’m sorry, but I think that this is an absurd analogy. This is “in some sense…” type reasoning taken much, much too far.

(As for the bit about the chronophone, well… I don’t think it’s critical to your points, so I won’t take up more space and time with my views on it. But if you think it’s a critical point, then I’ll respond to that, as I certainly do have opinions on it.)

comment by Bastian Sommerfeld · 2018-08-31T07:57:35.068Z · LW(p) · GW(p)

I agree that time travel is not something to which #1, #2, or #3 apply. However, I'm curious:

Are time travel stories not taking points #1 and #2 in order to create a scenario to explore #3? Meaning: I create a situation, and in order to make it explorable and detach it from people's expectations, I use the literary device of time travel.

comment by Raemon · 2018-08-30T20:00:10.787Z · LW(p) · GW(p)
But what does this have to do with the OP?

I think I can explain this better than it's currently been explained, but also it doesn't feel like you're engaging with the OP's goals or thesis, and I would want to see more effort from you to understand it before engaging further.

(I think in this thread you've raised some interesting points about what sort of things are worth learning and teaching, but they were mostly unrelated to Benquo's point)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-30T22:29:10.395Z · LW(p) · GW(p)

It perplexes me to see you say this.

As I say in the latter part of this comment [LW(p) · GW(p)], it seems to me that Benquo’s thesis is fundamentally confused/misguided, in that it attempts to conflate two things which it not only does not make sense to conflate, but which it is a bad idea to try to conflate.

Perhaps you disagree, or think that I’ve misunderstood. Fair enough. But it is strange to say that I haven’t engaged. The linked comment is absolutely the heart of my counterpoint. Almost no one (certainly not Benquo himself) responded to what I said there, or said anything at all relevant, with the exception of [LW(p) · GW(p)] Vaniver (whose comment I did not entirely understand, due—apparently?—to terminological issues, but to which I did respond, as you see).

Again, if you think I’ve misunderstood or you disagree with my view, then please do say why; but the claim that I simply haven’t engaged with the OP’s points strikes me as unsupportable.

comment by Bastian Sommerfeld · 2018-08-29T11:09:51.453Z · LW(p) · GW(p)

I'd like to understand why you think the explanation of yeast is inadequate and why, in your opinion, the adequacy of the explanation of yeast is of importance to the topic of the article, namely the exploration and typification of a certain style of explaining things.

Replies from: Benquo
comment by Benquo · 2018-08-29T18:41:40.994Z · LW(p) · GW(p)

Said did specify that it's inadequate for the isolation of supermarket yeast. I agree, but think that this is beside the point; I was trying to give an adequate description of yeast's role in making bread, not its role specifically in modern industrialized breadmaking.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-29T19:44:51.076Z · LW(p) · GW(p)

First of all, bread-making need not be “industrialized” in order to—for all practical purposes—require commercially-produced yeast. If you doubt this, then please provide “from-scratch” (i.e., without using store-bought yeast) recipes of all of the following (results must be indistinguishable from those produced with store-bought yeast):

  1. Challah
  2. “Black bread” (wheat+rye breads such as Darnitsky or Orlovsky)
  3. Brioche
  4. Pizza dough

Second, if you want to explain yeast’s role in making bread, it is not enough to comment that yeast gives off CO2 and thereby leavens the dough. You have to answer the following questions (can you do this without consulting Wikipedia?):

  1. Why should dough be leavened? You say this makes it “nicer” to eat, but how?
  2. Is yeast the only way to leaven dough? What are the two other common leavening methods?
  3. Why are the other leavening methods inappropriate for bread? (Or are they?)

In other words, for an answer to be fully satisfying, to impart the kind of understanding that lets us make predictions, it needs to answer not only the question of “why X?”, but also “why X and not Y or Z?” (In this case: “why yeast, and not any of the alternatives?”)

Finally, you say:

Zetetic explanations are empowering. First, the integration of concrete and model-based thinking is checkable on multiple levels—you can look up confirming or disconfirming facts, and you can also validate it against your personal experience or sense of plausibility, and validate the coherence and simplicity of the models used. Second, they affirm the basic competence of humans to explore our world. By centering the process of discovery rather than a finished product, such explanations invite the audience to participate in this process, and perhaps to surprise us with new discoveries.

Having read your story about yeast, what am I now empowered to do, that I previously could not? Make sourdough? But I could already do that; one does not need to know almost any of the stuff you said, in order to make sourdough. (Here’s a page that actually teaches you how to do it. Note that the only significant element it shares with your explanation is the point that yeast—and the bacteria that are also critical to a sourdough starter—are omnipresent in the environment. Indeed, I don’t even need to know that; I can simply assume that water and flour, left alone, turn into a sourdough starter by sheer magic, and the process will still work fine.) Make commercial-grade yeast? Nope, I couldn’t do that before and I still can’t do it.


Fundamentally, I think that you are conflating two very different things.

First, there are the sorts of explanations that fill in an overall, unified, coherent view of the world. (“Big History” is perhaps the purest example of this approach; and the sort of perspective advocated by Tooby & Cosmides in “The Psychological Foundations of Culture” is a classic example. Other examples abound, of course.) These are, indeed, valuable; they broaden our horizons, and allow us to understand our world as a single, continuous system, that encompasses all the phenomena we are aware of, on all the scales we can perceive. Such a perspective indeed has many benefits. The trouble is, it is difficult to construct, and takes far, far more effort, and more detail, and more scope, than what you’ve provided here. It is also very difficult to be sure that any given part of that unified perspective is correct. Verification is tedious and fraught with peril of error. And one must gain a very broad picture indeed, before it’s possible to use that unified perspective for any practical purposes.

Then there are the sorts of explanations that do, in fact, empower you to accomplish specific goals which you previously could not accomplish or even consider. These look very different from the other sort. They tend to be practical, specific, and circumscribed.[1]

Attempting to combine these things yields insight porn.


[1] To take literally the first example that comes to mind: most people don’t realize that they can easily make vanilla extract—a classic “magical thing produced by a mysterious Scientific-Industrial priesthood in special temples called laboratories or factories”—at home. How? Well, almost any flavor extract you can buy in the store is simply a solution of flavor-bearing compounds in alcohol—which is, as we all learn in chemistry class, an excellent solvent. Therefore, simply buy some vanilla beans, slice them open, and immerse them in vodka for several months. Voilà: vanilla extract. (Note the brevity of this explanation, the lack of history-lesson digressions, etc.)

Replies from: Benquo, habryka4, Benquo, Benquo
comment by Benquo · 2018-08-29T21:37:47.927Z · LW(p) · GW(p)
results must be indistinguishable from those produced with store-bought yeast

Why? Why would I care?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-29T21:42:09.028Z · LW(p) · GW(p)

That is simply an unambiguous standard of evaluation. If, instead, you prefer to aim for “superior to the results produced with store-bought yeast”, by all means, have at it. Relative quality is more difficult to agree upon than indistinguishability, of course; but if your results are sufficiently better, then this concern may not apply in practice.

Should I take your reply to mean that you do, in fact, have “from-scratch-yeast” recipes for all four types of baked goods I listed? I confess, I am now somewhat excited to see them!

comment by habryka (habryka4) · 2018-08-29T20:15:03.540Z · LW(p) · GW(p)

I think this is a great comment, and I would maybe like to see this broken out into its own post (after making it a bit more general than this specific circumstance)

Replies from: Benquo
comment by Benquo · 2018-08-29T21:39:19.117Z · LW(p) · GW(p)

comment by Benquo · 2018-08-29T21:52:02.731Z · LW(p) · GW(p)
Having read your story about yeast, what am I now empowered to do, that I previously could not?

For one thing, make hard cider from nothing but fresh apples.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-29T22:02:45.287Z · LW(p) · GW(p)
  • Cmd-F “cider”: No hits
  • Cmd-F “apples”: No hits

I beg to differ.

Of course, now that you’ve pointed it out, I know what you mean—or do I? I still don’t have anything remotely like a procedure for making cider. But tell me: have you made hard cider from nothing but fresh apples? If so, how did it turn out?

A second’s searching found this website: howtomakehardcider.com. The author of this site says:

Yes, you can make simple “hard cider” with bread yeast, a plastic jug and a balloon on top. If you want help with these crude methods, look for another website, and don’t invite me over for a taste. Blech.

This would seem to refer to the sort of method which you imply. (Right? I’m not quite sure… which is another problem with your claim!) Do you disagree with this fellow’s assessment? When he says that what you need to make hard cider is “Brewing yeast (NOT bread yeast)”, is he wrong?

Are you, in fact, claiming that either you personally, or someone whom you consider quite reliable (as opposed to, for example, “some guy on reddit”), have made hard cider from nothing but fresh apples, and it turned out well (drinkable, delicious, etc.)?


Edit: I was going to make the following point in a follow-up comment, but since Benquo has chosen to disengage [LW(p) · GW(p)] (which is certainly his right), I’ll put this here, for the benefit of others reading it:

Suppose that I, having read Benquo’s post, have this insight that “Oh! If wild yeast is everywhere, and it eats sugar, and it produces alcohol, then… I can… just kind of… leave apples sitting around… and they’ll turn into cider?? Right?!”

And suppose I try doing this. What will happen?

What will happen is that I will produce something terrible, and I will be lucky if I don’t give myself food poisoning (due to mold, e.g.).

And then—assuming the experience doesn’t put me off cider-making permanently—I will go online, and I will search for instructions on how to make cider (such as the site I linked above). And those instructions are going to describe a process that is much more complex than the one Benquo implies, and this process will require specialized equipment, and techniques which I could never simply deduce myself from first principles; and, most importantly of all, they will require commercially produced yeast.

But, of course, I could have simply done that in the first place. The “zetetic” explanation—and the misguided attempt to deduce some practical technique from it—adds nothing.

Replies from: Douglas_Knight, habryka4
comment by Douglas_Knight · 2018-08-30T17:52:56.993Z · LW(p) · GW(p)

From the first site, I think a clearer statement is from this more specific page, which says

Yeast: Wild or Domesticated
The real truth of using wild yeast is it is just going to depend on where your wild yeast comes from. Most folks at cider mills swear by using wild yeast, but that is because the yeast that lives at their apple processing facilities is especially adapted to work with apples. What about the yeast floating around in your kitchen (or bathroom?) You could end up with fantastic cider, horrible cider, or even vinegar (actually made from bacteria, but I digress).
Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-30T18:20:59.169Z · LW(p) · GW(p)

Indeed. And this, of course, goes to my point: you need to know and understand the specific domain in question in great detail (far greater than that provided in the OP) to make informed decisions, to accomplish anything.

comment by habryka (habryka4) · 2018-08-30T00:33:44.305Z · LW(p) · GW(p)

Not aiming to be a full response, but doesn't the StackExchange link you shared basically say that non-sterile cider isn't a real issue and that the straightforward thing should basically work?

Basically just wash the apples very well before pressing and practice good sanitation during production.
Use a desired yeast instead of chancing with wild fermentation. Few wild yeasts actually produce favorable results.
Once fermentation is complete the health risks are minimal. Fermented cider is an environment where harmful pathogens can't survive long. Some molds can produce neural toxins, so if you have black / blue mold use caution.

The second paragraph seems to indicate a problem with using wild yeasts, but my model is that this is basically just about taste, and not about any real risks.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-30T03:04:34.047Z · LW(p) · GW(p)

Ah, let me clarify: I linked that page not to claim that contamination/poisoning is a risk, but merely to support the claim that using wild yeast would not yield a satisfactory result (which is why I linked it from the part of my comment’s text that was about results, not the part about risk).

comment by Benquo · 2018-08-29T21:40:55.260Z · LW(p) · GW(p)
bread-making need not be “industrialized” in order to—for all practical purposes—require commercially-produced yeast.

Where do you think commercially-produced yeast comes from?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-08-29T21:48:17.455Z · LW(p) · GW(p)

I think it comes from special temples called laboratories or factories, where it is produced by a mysterious Scientific-Industrial priesthood. Why? Where do you think it comes from?

Do you mean to imply that you can, in the comfort of your own home, produce yeast which is as effective, for all the applications for which it’s used, as this stuff? (Or, even, all the applications for which it’s used by a home baker?) This is an exciting claim! Have you attempted to make money from being able to do so? Or, if you are not inclined to monetize this skill—certainly an understandable position!—would you consider writing a post detailing the process?

Replies from: Benquo
comment by Benquo · 2018-08-29T22:08:56.787Z · LW(p) · GW(p)

It seems like you're trying to misunderstand here, and being sarcastic about it, and I'm not going to engage further.