The Golden Mean of Scientific Virtues
post by adamShimi · 2024-07-08T17:16:21.676Z · LW · GW · 4 comments
This is a link post for https://epistemologicalfascinations.substack.com/p/the-golden-mean-of-scientific-virtues
I recently discovered this nice post on the scientific virtues by slimemoldtimemold.
Overall, I enjoyed it, and I find it pushes nicely against some recurrent memes about science and scientists (and innovation in general) in the general culture.
Yet in doing so, it also reinforces a set of opposite memes that can be just as wrong, and which I have seen many an aspiring scientist fall prey to. These will not be a surprise to any reader of pop history of science, because they mostly follow from the aspects of the history of science that are easiest to hype and turn into exciting stories with entertaining characters.
Let’s start with the list of scientific virtues from the original post:
The scientific virtues are:
- Stupidity
- Arrogance
- Laziness
- Carefreeness
- Beauty
- Rebellion
- Humor
These virtues are often the opposite of the popular image of what a scientist should look like. People think scientists should be intelligent. But while it’s helpful to be clever, it’s more important to be stupid. People think scientists are authority figures. Really, scientists have to defy authority — the best scientists are one step (or sometimes zero steps) away from being anarchists. People think scientists are arrogant, and this is true, but we worry that scientists are not arrogant enough.
They’re written in a provocative way to force you to think, helping the post foster what it preaches. And I mostly agree, even with the provocative names.
What bothers me more is that some of these virtues point to an important core of scientific practice by emphasizing the opposite extreme of the usual virtue, whereas the actual virtue is often found in the golden mean.
Inquiring and Trust
One recurrent theme in the essay is how many of these virtues reinforce the virtue of rebellion:
Stupidity can also be part of the inspiration behind the virtue of rebellion, a scientist’s ability to defy authority figures. If you’re stupid, you don’t realize when you should keep your mouth shut, so you say what you really think.
and
Like stupidity, arrogance is linked to the virtue of rebellion. If you think you are hot shit, you will not be afraid to go against the opinions of famous writers, ivy-league professors, public officials, or other great minds.
And rebellion itself is shown as an important virtue:
Rebellion is one of the highest scientific virtues. It is supported by stupidity — because you have to be pretty dumb to bet against the status quo and think you can win. It is supported by arrogance — in that you must be pretty arrogant to think you know better than the experts. It is supported by aesthetics — because seeing the possibility for a more beautiful experiment, a more beautiful theory, a more beautiful world is needed to inspire your rebellion. It is supported by carefreeness — not worrying about whether you win or lose makes the struggle against authority that much easier. Whenever possible, rebellion should be fun.
Why is rebellion so important in this frame?
Because it pushes back against blindly following authority figures. It lets you question things that are considered obvious, or wrong, or best left to authorities. Which is indeed a necessary condition for being a scientist or innovator of any kind.
But this quality is much subtler than just rebellion, as I expect the author understands. What they’re trying to light is a fire that enables rebellion in those who have been beaten into submission by authorities, the system, and education, and who have thus lost the ability to question anything.
Yet what the scientist has is the optionality of questioning: the option, not the obligation, to question something. You can easily grind everything to a halt if you start believing that you absolutely need to question everything, that nothing can ever be used or accepted which doesn’t pass the test of your judgment.
Even worse than that, you become basically unable to coordinate with others. Think about it: any significant group effort will always lead to separation of tasks and responsibilities, which almost always means that you need to trust the others to do their job and their part. That doesn’t mean you cannot give feedback or suggest things, but you cannot impose the burden (on them and on you) that everything must make perfect sense to you.
If we put our caricature of a scientist, with all the virtues extolled in the original post, in such a collaborative situation, they will have a lot of trouble deferring at all. Because they will have to question everything, to understand everything and reconstruct it for themselves.
So the actual virtue, the golden mean that is so elusive, is to find when and where to spend your inquiring points, and when it is not worth it, either because it’s not your priority or because some trust is needed given the sheer scale of your common endeavor.
Why is this not clear in the original post and most pop history of science? Mostly because the scientists who are best known and easiest to chronicle in fascinating narratives tend to be the ones whose work is most obviously attributable to them: theoreticians who built on a strong tradition, like Einstein, Feynman, Bohr, Poincaré; earlier gentlemen scientists who needed to do almost everything themselves (Darwin)… The post mentions others (McClintock, Curie, Ramon y Cajal…) but almost never mentions their collaborative efforts, I expect because it’s harder to bring such complex teamwork into context.
(A nice example of actual history of science which shows this collaborative aspect well is “Image and Logic” by Peter Galison, a history of particle physics experiments.)
Taste and Importance
Another big emphasis in the original post is on doing what makes sense to you, what makes your heart sing, what naturally swells up your curiosity.
If you do not cultivate the sense of carefreeness, you will get all tangled up about not working on “important” problems. You will get all tangled up about working on the things you think you “should be” working on, instead of the things you want to be working on, the things you find fun and interesting.
If research starts to be a drag, it won’t matter how talented you are. Nothing will kill your spark faster than finding research dull. Nothing will wring you out more than working on things you hate but you think are “important”.
and
You might say, “well surely someone has to think about these practical problems.” It’s true that some people should think about worldly things, but we don’t exactly see a shortage of that. What cannot be forced, and can only be cultivated, are free minds pursuing things that no one else thinks are interesting problems, for no good reason at all.
and
The fifth virtue that a scientist must cultivate is an appreciation for beauty. There are practical reasons to do science, but in the moment, great research is done just to do something because it’s beautiful and exemplifies enjoying that beauty.
This eye for beauty is not optional! It is, like all the scientific virtues, essential for doing any kind of original research.
Taste is what this is about: having that developed intuition for the thing, the one that lets you pick out potentially relevant and unexplored avenues for your research. I completely agree that research is incredibly hard, if not impossible, without taste.
I also agree with the general vibe of the previous quotes, that you cannot easily build taste for what you find a complete bore and a drag. That’s because building taste requires really seeing the thing, thinking about it over and over again, returning to it, some form of Tetris Effect even.
Yet the problem I see here, that I myself experienced, is that the prevailing intuition about how taste develops is completely miscalibrated. People try things out, study at certain places, choose certain research projects, all of that somewhat randomly, and when something starts clicking for them, they often latch onto it as if it’s “the one topic”.
Instead, taste lives much more within your personality: what you see first, what keeps your attention. Which means that taste is retargetable, and so you’re much less bound in what you can do research on than it appears at first glance.
To give a concrete example, I’m naturally an abstract, conceptual, meta-thinker. That means that by default what excites me are the systems, not the details: I see new fields and I’m excited by the general patterns or the links I see with other things, not the nitty-gritty details of the phenomenon under study. Yet with experience I have learned that I can, if I actually try, start to see the same systems and patterns and abstract ideas at a much more concrete level, when trying to understand the wealth of data around a single concrete thing (like a programming language or a loaf of sourdough bread).
And what this has allowed me to do, although not perfectly, is to retarget my research towards things that I feel are important. To not get nerd-sniped solely by the first cool idea I see, but to be able to decide how much effort I want to spend on what; and when I work on important causes (making the world go well despite existential risk), I can still find the excitement and curiosity of research, because there are always some elements within what is needed that fascinate me.
Which is not to say that I’m against the idea of doing pure research for its own sake. A society that fosters this kind of research is buying optionality, and a lot of such apparently useless research catches black swans down the road. But I do think it’s worth having an explicit distinction between this and the kind of research that directly attempts to tackle important problems, and not convincing aspiring scientists that they can only work on the latter if they have a deep innate interest in it.
Ironically, the original post contains all the material necessary to discuss this in the context of the nuclear elephant in the room: the Manhattan Project to develop the atomic bomb. This was a massive project focused on doing something important rather than something fun, and yet the physicists (including Feynman, Bohr, and others mentioned in the post) still managed to do groundbreaking work. Some of the funny Feynman quotes are even about him visiting plants as part of the atomic bomb work!
Lightness and Diligence
Lastly, I find that the original post emphasizes a bit too much the carefree/laziness part of being a scientist.
Laziness is not optional — it is essential. Great work cannot be done without it. And it must be cultivated as a virtue, because a sinful world is always trying to push back against it.
and
The hardest of the scientific virtues to cultivate may be the virtue of carefreeness. This is the virtue of not taking your work too seriously. If you try too hard, you get serious, you get worried, you’re not carefree anymore — you see, it’s a problem.
To be clear, the post does a much better job of tempering this one than the others I’ve criticized; it includes a couple of lines on the importance of hard work.
Everyone knows that research requires hard work. This is true, but your hard work has to be matched by a commitment to relaxation, slacking off, and fucking around when you “should” be working — that is, laziness.
and
Hard work needs to happen to bring an idea to fruition, but you cannot work hard all the time any more than a piston can be firing all the time, or every piston in an engine can fire at once.
Yet the big vibe you get out of the post, and of a lot of pop history of science, is that you really should spend a lot of time just doing nothing, looking at the sky and thinking whatever, and that’s how deep thoughts and results come about.
I think this unfortunate impression is mostly fostered by the massive overemphasis on theoretical physicists and mathematicians, who by definition can do most of their work in their heads, and have very few administrative burdens on them: the Einsteins, Feynmans, Bohrs, Maxwells…
Yet deep scientific knowledge doesn’t just come from random theorizing: it emerges from a systematic bedrock of observations, which you can then compress, ground, and explore, building theories, comparing them, and breaking them. As one of my friends likes to say, at the beginning of every science, there is one person cataloguing rocks.
This cataloguing of rocks requires the virtue of diligence, which is completely missing from the original post. One big difference between a rock cataloguer and most people is that the former takes the time to note everything that might be relevant about what they study, again and again, often without knowing what will come of it.
That being said, what I think the original post is trying to convey is that even in this diligence and hard work, there should ideally be a lightness, a playfulness, a curiosity. That just gritting your teeth through the cataloguing won’t work — and I agree.
But once again, you can definitely try to open your eyes and look for what is interesting, fascinating, exciting (to you and your taste) in whatever you’re doing.
Here too the post contains the germs for discussing this idea of diligence, given the mention of multiple experimentalists who spent their lives doing incredibly detailed and systematic work (Darwin, Curie, Ramon y Cajal, McClintock…). But they are only leveraged to reinforce the pop view of the scientist, instead of deepening the theoretically-focused portrait.
4 comments
comment by mesaoptimizer · 2024-07-08T22:09:28.085Z · LW(p) · GW(p)
I really appreciate this essay. I also think that most of it consists of sazens [LW · GW]. When I read your essay, I find my mind bubbling up concrete examples of experiences I've had, that confirm or contradict your claims. This is, of course, what I believe is expected from graduate students when they are studying theoretical computer science or mathematics courses -- they'd encounter an abstraction, and it is on them to build concrete examples in their mind to get a sense of what the paper or textbook is talking about.
However, when it comes to more inchoate domains like research skill, such writing does very little to help the inexperienced researcher. It is more likely that they'd simply miss out on the point you are trying to tell them, for they haven't failed both by, say, being too trusting (a common phenomenon) and being too wary of 'trusting' (a somewhat rare phenomenon for someone who gets to the big leagues as a researcher). What would actually help is either concrete case studies, or a tight feedback loop that involves a researcher trying to do something, and perhaps failing, and getting specific feedback from an experienced researcher mentoring them. The latter has an advantage that one doesn't need to explicitly try to elicit and make clear distinctions of the skills involved, and can still learn them. The former is useful because it is scalable (you write it once, and many people can read it), and the concreteness is extremely relevant to allowing people to evaluate the abstract claims you make, and pattern match it to their own past, current, or potential future experiences.
For example, when reading the Inquiring and Trust section, I recall an experience I had last year where I couldn't work with a team of researchers, because I had basically zero ability to defer (and even now as I write this, I find the notion of deferring somewhat distasteful). On the other hand, I don't think there's a real trade-off here. I don't expect that anyone needs to naively trust that other people they are coordinating with will have their back. I'd probably accept the limits to coordination, and recalibrate my expectations of the usefulness of the research project, and probably continue if the expected value of working on the project until it is shipped is worth it (which in general it is).
When reading the Lightness and Diligence section, I was reminded of the Choudhuri 1985 paper, which describes the author's notion of a practice of "partial science", that is, an inability to push science forward due to certain systematic misconceptions of how basic (theoretical physics, in this context) science occurs. One misconception involves a sort of distaste around working on 'unimportant' problems, or problems that don't seem fundamental, while only caring about or willing to put in effort to solve 'fundamental' problems. The author doesn't make it explicit, but I believe that he believed that the incremental work that scientists do is almost essential for building their knowledge and skill to make their way forwards towards attacking these supposedly fundamental problems, and the aversion to working on supposedly incremental research problems leads people to being stuck. This seems very similar to the thing you are pointing at when you talk about diligence and hard work being extremely important. The incremental research progress, to me, seems similar to what you call 'cataloguing rocks'. You need data to see a pattern, after all.
This is the sort of realization and thinking I wouldn't have if I did not have research experience or did not read relevant case studies. I expect that Mesa of early 2023 would have mostly skimmed and ignored your essay, simply because he'd scoff at the notion of 'Trust' and 'Lightness' being relevant in any way to research work.
comment by adamShimi · 2024-07-09T08:30:58.369Z · LW(p) · GW(p)
However, when it comes to more inchoate domains like research skill, such writing does very little to help the inexperienced researcher. It is more likely that they'd simply miss out on the point you are trying to tell them, for they haven't failed both by, say, being too trusting (a common phenomenon) and being too wary of 'trusting' (a somewhat rare phenomenon for someone who gets to the big leagues as a researcher). What would actually help is either concrete case studies, or a tight feedback loop that involves a researcher trying to do something, and perhaps failing, and getting specific feedback from an experienced researcher mentoring them. The latter has an advantage that one doesn't need to explicitly try to elicit and make clear distinctions of the skills involved, and can still learn them. The former is useful because it is scalable (you write it once, and many people can read it), and the concreteness is extremely relevant to allowing people to evaluate the abstract claims you make, and pattern match it to their own past, current, or potential future experiences.
I wholeheartedly agree.
The reason why I didn't go for this more grounded and practical and teachable approach is that at the moment, I'm optimizing for consistently writing and publishing posts.
Historically the way I fail at that is by trying too hard to write really good posts and make all the arguments super clean and concrete and detailed -- this leads to me dropping the piece after like a week of attempts.
So instead, I'm going for "write what comes naturally, edit a bit to check typos and general coherence, and publish", which leads to much more abstract pieces (because that's how I naturally think).
But reexploring this topic in an in-depth and detailed piece in the future, along the lines of what you describe, feels like an interesting challenge. Will keep it in mind. Thanks for the thoughtful comment!
comment by mesaoptimizer · 2024-07-10T06:36:47.567Z · LW(p) · GW(p)
I’m optimizing for consistently writing and publishing posts.
I agree with this strategy, and I plan to begin something similar soon. I forgot that Epistemological Fascinations is your less polished and more "optimized for fun and sustainability" substack. (I have both your substacks in my feed reader.)