(1) Physics generally seems like a trustworthy discipline - the level of rigor, replicability, lack of incentive for making false claims, etc. So base rate of trust is high in that domain.
(2) There doesn't seem to be anyone claiming otherwise or any major anomalies around it, with the possible exception of how microscopic/quantum levels of things interact/aggregate/whatever with larger scale things.
(3) It would seem to need to be at least correct-ish for a lot of modern systems, like power plants, to work correctly.
(4) I've seen wood burn, put fuel into a car and then seen the car operate, etc.
(5) On top of all of that, if the equation turned out to be slightly wrong, it's unlikely I'd do anything differently as a result so it's not consequential to look very deeply into it (beyond general curiosity, learning, whatever).
As a personal convention, I don't assign a probability above 99% to anything being true, other than the very most trivial claims (2+2=4). So I'm at 99% that E=mc² is correct enough to treat as true — though I'd look into it more closely if I was ever operating in an environment where it had meaningful practical implications.
On the contrary - this is a strict materialist perspective which looks to disambiguate the word 'trauma' into more accurate nouns, and replace the vague word 'heal' with more actionable and concrete verbs.
I think there's often a language/terminology challenge around these areas. For instance, at different times I had a grade 3 ankle sprain after endurance training, and a grade 2 wrist sprain after a car crash - those are clearly acute trauma (in the medical meaning of the word) and they do require some mix of healing to the extent possible for recovery of physical function.
But I've always found it tricky that the same word 'trauma' is used for physical injuries, past bad experiences, and as a broad description of maladaptive patterns of thought and behavior.
It's a broad word that people use in different ways.
Two things I've found useful.
(1) Highest recommendation for Lakoff's Metaphors We Live By (1980) which looks at conceptual metaphors:
https://en.wikipedia.org/wiki/Metaphors_We_Live_By
From Chapter 7, "Personification":
Perhaps the most obvious ontological metaphors are those where the physical object is further specified as being a person. This allows us to comprehend a wide variety of experiences with nonhuman entities in terms of human motivations, characteristics, and activities. Here are some examples:
- His theory explained to me the behavior of chickens raised in factories.
- This fact argues against the standard theories.
- Life has cheated me.
- Inflation is eating up our profits.
- His religion tells him that he cannot drink fine French wines.
- The Michelson-Morley experiment gave birth to a new physical theory.
- Cancer finally caught up with him.
In each of these cases we are seeing something nonhuman as human. But personification is not a single unified general process. Each personification differs in terms of the aspects of people that are picked out.
Consider these examples.
- Inflation has attacked the foundation of our economy.
- Inflation has pinned us to the wall.
- Our biggest enemy right now is inflation.
- The dollar has been destroyed by inflation.
- Inflation has robbed me of my savings.
- Inflation has outwitted the best economic minds in the country.
I think a lot of discussion around the word "trauma" follows these characteristics — the challenge is, a lot of times people move between a literal well-scoped definition of trauma, say the medical one, and a more metaphorical/ontological description. People often do this without noticing it.
For instance, I can talk about the acute trauma of the wrist injury from a car crash, and everyone will largely understand what I'm talking about. But the same word 'trauma' would often be used if I described some fear or aversion to getting into cars going forward. I don't have one, but if I did, people would refer to both the wrist injury and the thing which caused the aversion to cars as 'trauma' — which seems somewhat confused to me. Clearly a wrist injury needs healing, in the biological and medical sense of the word healing.
Does an aversion to getting into cars need "healing" in the same way? I mean, maybe, if you've got a definition of "healing" from neuroscience under which reworking the chain reaction of synapses that fire in response to a stimulus and produce a maladaptive behavioral pattern counts as "healing." But - like, probably not. "Healing" in that context is a metaphor.
For my part, and just speaking for myself, I think the term "extinction" — though less in line with the current cultural milieu — is a much better word than "healing" for removing maladaptive emotional and behavioral patterns.
https://en.wikipedia.org/wiki/Extinction_(psychology)
In my way of thinking about it,
- A traumatic wrist injury is repaired by physical healing.
- An irrational aversion to getting in cars is repaired by extinction of the behavior.
How to do the latter — talk-oriented therapies, exposure therapy (which is typically recommended for phobias), practice and training on implementing good patterns in similar situations to ones where you've displayed undesirable patterns of behavior, cognitive behavioral therapy if you're ruminating too much, etc - well, unfortunately there's no consensus currently on what works the best for any given case.
But I think starting with a model of "I need to heal" is questionable. Relatedly, I'm also skeptical of using the word "heal" for biochemical imbalances — for biochemically based depression, for instance, I think "I need to get my hormones and biochemistry better-regulated to remove depressive symptoms" is a mix of more actionable, more accurate, and more subjectively empowering than "I need to heal from depression."
Anyway, this goes strongly against the current cultural milieu - and I haven't been maximally precise in the comment. A lot could be nitpicked. But I think extinction of maladaptive thought patterns and maladaptive behavior patterns is more easily accomplished (and a more accurate description of reality) than healing; likewise, "regulating" seems more accurate than "healing" to me for biochemically based phenomena.
It's been useful for me to think about it this way, and sometimes useful for other people. Though, different things work for different people - so add salt liberally. Regardless, Lakoff's Metaphors is extremely relevant to the topic and highly recommended.
Partially agreed again.
I'd be hesitant to label as "Critical" pointing out that someone has an invalid argument, and having it implicitly contrasted against "Positive" — it implies they're opposites or antithetical in some way, y'know?
Also, respectfully disagree with this -
"The specific issue with ‘Not what I meant’ is that the icon reads as ‘you missed’ and not ‘we missed’. Communication is a two-way street and the default react should be at least neutral and non-accusatory."
Sometimes a commenter, especially someone new, is just badly off the mark. That's not a two-way street problem, it's a Well-Kept Garden problem...
I agree that drive-by unpleasant criticisms without substance ("Obtuse") don't seem productive, but I actually think some of the mild "tonally unpleasant" ones could be very valuable. It's a way for an author to inexpensively let a commenter know that they didn't appreciate the comment.
"Not what I meant" seems particularly valuable for when someone mis-summarizes or inferences wrongly what was written, and "Not worth getting into" seems useful when someone who unproductively deep on a fine-grained detail of something more macro oriented.
One challenge, though, is when you have mixed agreement with someone. I disagree on tonal unpleasantness and the grouping style - "Taboo your words" might be friendly, for instance, to keep sharpening discussion, and isn't necessarily critical. But I agree with a meta/bikeshed and clearing up some of the ambiguous ones.
I clicked both "Disagree" and "Agree" on yours for partial agreement / mixed agreement, but that seems kind of unintuitive.
Not sure how many posts you've made here or elsewhere, but as someone who has done a lot of public writing this seems like a godsend. It will reflect poorly on someone who deploys those a lot in a passive aggressive way, but we've all seen threads that are exhausting to the original poster.
This seems particularly useful for when someone makes a thoughtful but controversial point that spurs a lot of discussion. The ability to acknowledge you read someone's comment without deeply engaging with it is particularly useful in those cases.
I turned this on for a recent post and I'm incredibly impressed.
This is the coolest feature I've seen for discussion software in many years.
Highly recommended to try it out if you make a post.
I'm a Westerner, but did business in China, have quite a few Chinese friends and acquaintances, and have studied a fair amount of classical and modern Chinese culture, governance, law, etc.
Most of what you're saying makes sense with my experience. A lot of Western ideas are generally regarded as either "sounds nice but is hypocritical and not what Westerners actually do" (a common viewpoint until ~10 years ago) or, as a somewhat newer idea, "actually no, many young Westerners are sincere about their ideas - they're just crazy in an ideological way about things that can't and won't work." (白左, etc)
The one place I might disagree with you is that I think mainland Chinese leadership tends to have two qualities that might be favorable towards understanding and mitigating AI risk:
(1) The majority of senior Chinese political leadership are engineers and seem intrinsically more open to having conversations along science and engineering lines than the majority of Western leadership. Pathos-based arguments, especially emerging from Western intellectuals, do not get much uptake in China and aren't persuasive. But concerns around safety, second-order effects, third-order effects, complex system dynamics, causality, etc, grounded in scientific, mathematical, and engineering principles seem to be engaged with easily at face value in private conversations, and with a level of technical sophistication that there doesn't need to be as much direct reliance on asking for industry leaders and specialists to explain and contextualize diagrams, concepts, technologies, etc. Senior Chinese leadership also seem to be better - this is just my opinion - at identifying credible and non-credible sources of technical information and identifying experts who make sound arguments grounded in causality. This is a very large advantage.
(2) In recent decades, it seems like mainland Chinese leadership are able to both operate on longer timescales - credibly making and implementing multi-decade plans and running them - as well as making rapid changes in technology adoption, regulation, and economic markets once a decision has been made in an area. The most common examples we see in the West are videos of skyscrapers being constructed very rapidly, but my personal example is I remember needing to go pay my rent with shoeboxes full of 100 renminbi notes during the era of Hu Jintao's chairmanship and being quite shocked when China went to near cashless almost overnight.
I think those two factors - genuine understanding of engineering and technical causality, combined with greater viability for engaging in both longer-timescale and short-timescale action - seem like important points worth mentioning.
Hmm. Looks like I was (inadvertently) one of the actors in this whole thing. Not intended and unforeseen. Three thoughts.
(1) At the risk of sounding like a broken record, I just wanna say thanks again to the moderation team and everyone who participates here. I think oftentimes the "behind the scenes coordination work" goes unnoticed during all the good times and not enough credit is given. I just like to notice it and say it outright. For instance, I went to the Seattle ACX meetup yesterday which I saw on here (LW), since I check ACX less frequently than LW. I had a great time and had some really wonderful conversations. I'm appreciative of all the people facilitating that, including Spencer (Seattle meetup host) and the whole team that built the infrastructure here to facilitate sharing information, getting to know each other, etc.
(2) Just to clarify - not that it matters - my endorsement of Duncan's post was about the specific content in it, not about the author of the post. I do think Duncan did a really nice job taking very complex concepts and boiling them down to guidelines like "Track (for yourself) and distinguish (for others) your inferences from your observations" and "Estimate (for yourself) and make clear (for others) your rough level of confidence in your assertions" — he really summed up some complex points very straightforwardly and in a way that makes the principles much easier to implement / operationalize in one's writing style. That said, I didn't realize when I endorsed the Rationalist Discourse post that there were some interpersonal tensions independent from the content itself. Both of those posters seem like decent people to me, but I haven't dug deep on it and am not particularly informed on the details.
(3) I won't make a top-level post about this, because second-degree meta-engagement with community mechanics risks setting off more second-degree and third-degree meta-engagement, and then things spiral. But as a quick recommendation to people interested in how people relate with each other, my favorite movie is Unforgiven, a very non-traditional Clint Eastwood movie. It's like a traditional Western (cowboys, horses, etc) but really very different from the normal genre. Basically, there's only one genuinely unprovoked "bad guy" in the movie, who has causal agency for only about 30-60 seconds of doing something bad. After that, it's all just a chain reaction of people doing as best as they can by their values and friends, and yet the results are very bad for everyone. Incidentally, it's also a really cinematically beautiful movie, which contrasts with the unfolding tragedy. It's a great movie. Highly recommended.
A bit late in commenting and I understand the "mongo" example was pointing at a more general concept, but I decided to check in on the current state of prediction. Not perfect, n=1, could certainly be set out better, but thought I'd give this a whirl:
Me:
Hello, I'd like to test your predictive ability on something interesting and novel. May we?
ChatGPT:
Hello! Of course, I'd be happy to help you with your prediction. What would you like me to predict?
Me:
First, some context — I'm an American tech CEO. I like and have read a lot of classical philosophy, ranging from Aristotle to Wittgenstein and much in between. I'm an amateur historian who has written a few million words of nonfiction with some historical fiction mixed in. I'm logical and technical. I'm in my late 30s. I'm from the East Coast but in Seattle. That's the context.
ChatGPT:
Thank you for the context. Is there a specific question or topic you would like me to predict about based on your background and interests?
Me:
Yes. I'm going to think of a fictional book. It's a book I like - one of my favorites. I'd like you to make a list of 5 guesses of what book I might be thinking of, along with what the most likely of the 5 is.
ChatGPT:
Certainly, I'll do my best to generate a list of 5 guesses for your favorite book based on your background and interests. Here are my predictions:
- "The Brothers Karamazov" by Fyodor Dostoevsky - This novel explores complex philosophical and religious themes, which aligns with your interest in classical philosophy. Its exploration of morality and ethics would also appeal to your logical and technical mindset.
- "The Name of the Rose" by Umberto Eco - This historical mystery novel is set in a medieval monastery and draws upon elements of philosophy and theology, which would align with your interests in both history and philosophy.
- "Infinite Jest" by David Foster Wallace - This novel combines humor, philosophy, and commentary on contemporary American culture, which would align with your interests in both philosophy and fiction. Its complexity would also appeal to your logical and technical mindset.
- "The Hitchhiker's Guide to the Galaxy" by Douglas Adams - This satirical science fiction novel uses humor to explore philosophical and existential themes, which would appeal to your interest in philosophy. Its wit and creativity would also appeal to your literary and imaginative side.
- "Slaughterhouse-Five" by Kurt Vonnegut - This anti-war novel uses elements of science fiction to explore the themes of free will and the nature of time, which would align with your interest in both history and science fiction. Its blend of humor and tragedy would also appeal to your literary sensibilities.
Based on your interests and background, my most likely guess for your favorite book would be "The Brothers Karamazov" by Fyodor Dostoevsky.
Me:
Ah, not so. Let's narrow down a bit. It's science fiction. Would you try again?
ChatGPT:
Thank you for the clarification. Here are 5 new predictions based on your background and interest in science fiction:
- "Dune" by Frank Herbert - This classic science fiction novel is known for its complex world-building and exploration of political and philosophical themes, which align with your interests in history, philosophy, and science fiction.
- "Snow Crash" by Neal Stephenson - This cyberpunk novel explores themes of language, culture, and technology, which align with your interest in technical and technological topics. Its blend of humor and action would also appeal to your literary and imaginative side.
- "Ender's Game" by Orson Scott Card - This science fiction novel explores themes of morality, leadership, and the consequences of war, which align with your interests in history, philosophy, and science fiction. Its blend of action and psychological complexity would also appeal to your literary and imaginative side.
- "The Diamond Age" by Neal Stephenson - This novel explores themes of technology, culture, and education, which align with your interest in technical and technological topics. Its exploration of political and social issues would also appeal to your interest in history.
- "Hyperion" by Dan Simmons - This science fiction novel is known for its complex world-building and exploration of philosophical and theological themes, which align with your interest in classical philosophy. Its structure, which is composed of interconnected stories, would also appeal to your literary sensibilities.
Based on your interest in science fiction, my most likely guess for your favorite book would be "Dune" by Frank Herbert.
Me:
Not a bad guess - I like Dune a lot. I was thinking of Asimov's "Foundation" though.
ChatGPT:
Ah, I see! "Foundation" is a classic work of science fiction and definitely a great choice. I apologize for not guessing it correctly on the first try, but I hope my predictions were still interesting and useful. Thank you for the opportunity to test my predictive abilities!
Me:
Can we hypothesize a little on why you didn't get Foundation? It combines technology, history, and philosophy...
ChatGPT:
Certainly! While I cannot say for sure why I did not predict "Foundation" as your favorite book, one possible reason could be that your background and interests were broad enough to support several different science fiction titles. Additionally, while "Foundation" does incorporate elements of technology, history, and philosophy, these themes are also present in many other science fiction works, making it challenging to narrow down to one specific book.
Another possibility is that the themes and style of "Foundation" may not have stood out as strongly to me based on the limited information I had about your interests and background. While I did take into account your interest in history and philosophy, other factors such as your preference for a specific writing style or subgenre of science fiction may have been important in predicting your favorite book.
I've read it. There was some back-and-forth between him and Tegmark on the topic.
Hey, first just wanted to say thanks and love and respect. The moderation team did such an amazing job bringing LW back from nearly defunct into the thriving place it is now. I'm not so active in posting now, but check the site logged out probably 3-5 times a week and my life is much better for it.
After that, a few ideas:
(1) While I don't 100% agree with every point he made, I think Duncan Sabien did an incredible job with "Basics of Rationalist Discourse" - https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1 - perhaps a boiled-down canonical version of that could be created. Obviously the pressure to get something like that perfect would be high, so maybe something like "Our rough thoughts on how to be a good contributor here, which might get updated from time to time". Or just link Duncan's piece as "non-canonical for rules but a great starting place." I'd hazard a guess that 90% of regular users here agree with at least 70% of it? If everyone followed all of Sabien's guidelines, there'd be a rather high quality standard.
(2) I wonder if there are some reasonably precise questions you could ask new users to check for understanding, which could be there as a friendly-ish guidepost if a new user is going wayward. Your example - "(for example: "beliefs are probabilistic, not binary, and you should update them incrementally")" - seems like a really good one. Obviously those should be incredibly non-contentious, but something that would demonstrate a core understanding. Perhaps 3-5 of those, maybe something where a person formally writes up some commentary on their personal blog before posting?
(3) It's fallen from its peak glory years, but sonsofsamhorn.net might be an interesting reference case to look at — it was one of the top analytical sports discussion forums for quite a while. At the height of its popularity, many users wanted to join but wouldn't understand the basics - for instance, that a poorly-positioned player on defense making a flashy "diving play" to get the baseball wasn't a sign of good defense, but rather a sign that that player has a fundamental weakness in their game, which could be investigated more deeply with statistics - and we can't just trust flashy replay videos to be accurate indicators of defensive skill. (Defense in American baseball is particularly hard to measure and sometimes contentious.) What SOSH did was create an area called "The Sandbox" which was relatively unrestricted — spam and abuse still weren't permitted of course, but the standard of rigor was a lot lower. Regular members would engage in Sandbox threads from time to time, and users who made excellent posts and comments in The Sandbox would get invited to full membership. Probably not needed at the current scale level, but might be worth starting to think about for a long-term solution if LW keeps growing.
Thanks so much for everything you and the team do.
I had a personal experience that strongly suggests that this is at least partially true.
I had a mountaineering trip in a remote location that went off the rails pretty badly — it was turning into a classical "how someone dies in the woods" story. There was a road closure some miles ahead of where I was supposed to drive, I hiked an extra 8 miles in, missed the correct trail, tried to take a shortcut, etc etc - it got ugly.
I felt an almost complete lack of distress or self-pity the entire time. I was just very focused methodically on orienting around my maps and GPS and getting through the next point.
I was surprised at how little negative internal discourse or negative emotions I felt. So, n=1 here, but it was very informative for me.
This isn't necessarily "Come for the instrumentality, stay for the epistemology" — but, maybe.
broke peace first.
Have you read "Metaphors We Live By" by Lakoff?
The first 20 pages or so are almost a must-read in my opinion.
Highly recommended, for you in particular.
A Google search with filetype:pdf will find you a copy. You can skim it fast — not needed to close read it — and you'll get the gems.
Edit for exhortation: I think you'll get a whole lot out of it such that I'd stake some "Sebastian has good judgment" points on it that you can subtract from my good judgment rep if I'm wrong. Seriously please check it out. It's fast and worth it.
Huh. Interesting.
I had literally the exact same experience before I read your comment dxu.
I imagine it's likely that Duncan could sort of burn out on being able to do this [1] since it's pretty thankless difficult cognitive work. [2]
But it's really insightful to watch. I do think he could potentially tune up [3] the diplomatic savvy a bit [4] since, while his arguments are quite sound [5], I think he probably is sometimes making people feel a little bit stupid via his tone. [6]
Nevertheless, it's really fascinating to read and observe. I feel vaguely like I'm getting smarter.
###
Rigor for the hell of it [7]:
[1] Hedged hypothesis.
[2] Two-premise assertion with a slightly subjective basis, but I think a true one.
[3] Elaborated on a slightly different but related point further in my comment below to him with an example.
[4] Vague but I think acceptably so. To elaborate, I mean making one's ideas palatable to the person one is disagreeing with, even while disagreeing. Note: I'm aware it doesn't acknowledge the cost of doing so and running that filter. Note also: I think, with skill and practice, this can be done without sacrificing the content of the message. It is almost always more time-consuming though, in my experience.
[5] There's some subjective judgments and utility function stuff going on, which is subjective naturally, but his core factual arguments, premises, and analyses basically all look correct to me.
[6] Hedged hypothesis. Note: doesn't make a judgment either way as to whether it's worth it or not.
[7] Added after writing to double-check I'm playing by the rules and clear up ambiguity. "For the hell of it" is just random stylishness and can be safely mentally deleted.
(Or perhaps, if I introspect closely, a way to not be committed to this level of rigor all the time. As stated below though, minor stylistic details aside, I'm always grateful whenever a member of a community attempts to encourage raising and preserving high standards.)
First, I think promoting and encouraging higher standards is, if you'll pardon the idiom, doing God's work.
Thank you.
I'm so appreciative any time any member of a community looks to promote and encourage higher standards. It takes a lot of work and gets a lot of pushback and I'm always super appreciative when I see someone work at it.
Second, and on a much smaller note, if I might offer some......... stylistic feedback?
I'm only speaking here about my personal experience and heuristics. I'm not speaking for anyone else. One of my heuristics — which I darn well know isn't perfectly accurate, but it's nevertheless a heuristic I implicitly use all the time and which I know others use — is looking at language choices made when doing a quick skim of a piece as a first-pass filter of the writer's credibility.
It's often inaccurate. I know it. Still, I do it.
Your writing sometimes, when you care about an issue, seems to veer very slightly into resembling the writing of someone who is heated up about a topic in a way that leads to less productive and coherent thought.
My default reaction, then, is to discount the credibility of the message slightly.
I have to forcibly remind myself not to do that in your case, since you're actually taking pretty cohesive and intelligent positions.
As a small example:
These are all terrible ideas.
These are all *terrible* ideas.
I'm going to say it a third time, because LessWrong is not yet a place where I can rely on my reputation for saying what I actually mean and then expect to be treated as if I meant the thing that I actually said: I recognize that these are terrible ideas.
I just — umm, in my personal... umm.... filters... it doesn't look good on a skim pass. I'm not saying emulate soul-less garbage at the expense of clarity. Certainly not. I like your ideas a lot. I loved Concentration of Force.
I'm just saying that, on the margin, if you edited down some of the first-person language and strong expressions of affect a little bit in areas where you might be concerned about it being "not yet a place where I can rely on my reputation for saying what I actually mean"... it might help credibility.
I've written quite literally millions of words in my life, so I can say from firsthand experience that lines like that do successfully pre-empt stupid responses and get you fewer dumb comments.
That's true.
But I think it's likely you take anywhere from a 10% to 50% penalty to credibility to many casual skimmers of threads who do not ever bother to comment (which, incidentally, is both the majority of readers and me personally in 2021).
I see things like the excerpted part, and I have to consciously remind myself not to apply a credibility discount to what you're saying, because (in my experience and perhaps unfairly) I pattern match that style to less credible people and less credible writing.
Again, this is just a friendly stylistic note. I consider myself a fan. If I'm mistaken or it'd be expensive to implement an editing filter for toning that down, don't bother — it's not a huge deal in the grand scheme of things, and I'm really happy someone is working on this.
I suppose I'm just trying to improve the good guys' effectiveness for concentration of force reasons, you could say.
Salut and thanks again.
There's a very thorough paper published in the American Journal of Epidemiology, "Use of a prescribed ephedrine/caffeine combination and the risk of serious cardiovascular events: a registry-based case-crossover study", DOI: 10.1093/aje/kwn191
Apparently, and this really surprised me,
"Use of prescribed ephedrine in Denmark — Letigen was a pharmaceutical product containing 20 mg of synthetic ephedrine and 200 mg of caffeine, available only by prescription. Its recommended dose was 1–3 tablets per day, depending on the user’s tolerance. It was approved for sale in Denmark in 1990. During the peak of its use in 1999, some 110,000 persons, corresponding to 2% of the Danish population, were treated. In 2002, the marketing license was suspended, after a number of reports had suggested a safety problem."
So there's a pretty big sample there.
Now note, I'm not a doctor and this is just my opinion — it seems that some people should never take ephedrine under any circumstances (certain heart problems or family history of certain heart problems, etc) and anyone else ought to be really quite careful taking it if it's legal and approved in one's jurisdiction.
Ephedrine increases metabolic activity and thermogenesis — heat production — and it's more dangerous when it's hot outside, when you're doing any aerobic activity, or if you've had any other stressors on your heart or run into other contraindications.
Speculatively, it seems possible that safety rates in Denmark might be higher than elsewhere since it doesn't get very hot there. If you compared someone using ephedrine/caffeine in Siberia in the winter to Dubai in the summer, the increased thermogenesis and physically radiating more heat might seem like a beneficial side effect in an arctic blizzard whereas both uncomfortable and dangerous under a desert sun.
I'm going off the top of my head here since I don't have a copy in front of me, but I remember some very persuasive arguments and citations in the (terribly titled but otherwise quite good) book Extreme Productivity by Bob Pozen.
Basically, Pozen's cited studies found the steady approach pays off on basically every dimension you'd care about (including quality and quantity of the work, efficiency, and decreased various badness). I found it pretty persuasive and switched from working in intense bursts to a more methodical way when writing, for the next few years, and it worked well for me. I got the time it took me to write a 6000 word essay down from ~40 hours to the 12-18 hour range, quality was better, and it was less stressful.
Doesn't necessarily generalize, and I'd speculate it maybe generalizes least for things that benefit from being at some critical mass threshold for a short period of time (say, like, an auction). That part is just speculation though.
Re: the Repugnant Conclusion, it’s not necessarily my opinion, but there’s a coherent set of moral principles that values A+ over A but also A+ over B-.
It might come from something like rejecting diminishing marginal utility as relates to certain very big questions — thinking that yes, Mozart + five otherwise uncreated good lives of new musicians is better than Mozart alone, but a world of six musicians substantially worse than Mozart is worse than either just Mozart+0 or Mozart+5.
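To make that concrete, here's one toy way to formalize such a principle (a sketch in Python; the valuation rule, the epsilon, and all the numbers are my own invented assumptions, not a claim about the right axiology): value a world by the quality of its best life, plus a small bonus per unit of total welfare.

```python
# Toy valuation, purely illustrative: a world's value is the quality of
# its best life plus a small bonus per unit of total welfare. EPSILON
# and all the numbers below are assumptions invented for this example.
EPSILON = 0.01

def value(lives):
    return max(lives) + EPSILON * sum(lives)

mozart_alone = [100]                 # A:  Mozart by himself
mozart_plus_five = [100] + [60] * 5  # A+: Mozart plus five lesser good lives
six_lesser = [70] * 6                # B-: six musicians substantially worse

print(value(mozart_alone))      # 101.0
print(value(mozart_plus_five))  # 104.0 -> A+ beats A
print(value(six_lesser))        # 74.2  -> and A+ beats B-
```

Under a rule like this, adding good lives always helps a little (A+ beats A), but leveling everyone down to mediocrity loses more than the added lives gain (A+ beats B-), so the usual stepwise march toward the Repugnant Conclusion stalls at the first step.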
Hmm. At the time of my starting this comment, this is on the frontpage and at +31 after my strong vote up — but it had no comments on it.
This is somewhat unusual — this is normally a group where at least one person will quickly comment with a flash first-pass impression, introduce a question, talk about something in the domain, link a research paper, or share a related quote...
And no one has yet done so.
So, here is my (somewhat meta) take — I read this in bits and pieces, somewhat slowly, over the afternoon and evening between calls and activities, periodically coming back to it in my browser. At first, I was like, ok, I get where this is going; I’m familiar with the general background and theories and I’ve had some of the personal experience of thinking through genes and their implications and how I relate to them, etc. Your personal experience of reasoning about it wasn’t exactly the same as mine, but close enough to be recognizable and it made sense.
Then you build up to your conclusion and there was this significant shift in my thinking — I think it happened for me roughly around where you discuss how learning the underlying genetic theories seemed to “hollow out” the lion, but updating your understanding of the genetics didn’t “re-fill” the lion — and I had this experience of, “Oh wait, I think there might be a significant and large hole in my thinking on the topic.”
This combined with the general stylishness of the piece — for lack of a better word — Shakespeare, the image choices, the language choices, etc... left me in an interesting and unusual place I don’t wind up in after reading nonfiction:
The first was a strong intrinsic desire to think this through more clearly before formulating any other opinions on the topic. The second was — again for lack of a better word — a mild form of something like “awe.”
This was a really delightful and interesting read, and I’m grateful for having read it. I can understand why there aren’t any other comments yet, though — it seems like something of sufficient importance that it would not be fitting to make a snap judgment or contribute a tiny detail, since I should spend some time around what now seems to be a large gap in my thinking on this topic that I hadn’t adequately perceived or reasoned through.
So, anyways, that was my experience reading this. Thanks for writing it. “Thought provoking” gets thrown around rather casually these days, but this was very much the strong version of that for me.
First, I love this question.
Second, this might seem way out of left field, but I think this might help you answer it —
https://en.wikipedia.org/wiki/B%C3%BCrgerliches_Gesetzbuch#Abstract_system_of_alienation
One of the BGB's [editor: the German Civil Law Code] fundamental components is the doctrine of abstract alienation of property (German: Abstraktionsprinzip), and its corollary, the separation doctrine (Trennungsprinzip). Derived from the works of the pandectist scholar Friedrich Carl von Savigny, the Code draws a sharp distinction between obligationary agreements (BGB, Book 2), which create enforceable obligations, and "real" or alienation agreements (BGB, Book 3), which transfer property rights. In short, the two doctrines state: the owner having an obligation to transfer ownership does not make you the owner, but merely gives you the right to demand the transfer of ownership.
I have an idea of what might be going on here with your question.
It might be the case that there's two fairly-tightly-bound — yet slightly distinct — components in your conception of "theoretical evidence."
I'm having a hard time finding the precise words, but something around evidence, which behaves more-or-less similarly to how we typically use the phrase, and something around... implication, perhaps... inference, perhaps... something to do with causality or prediction... I'm having a hard time finding the right words here, but something like that.
I think it might be the case that these components are quite tightly bound together, but can be profitably broken up into two related concepts — and thus, being able to separate them BGB-style might be a sort of solution.
Maybe I'm mistaken here — my confidence isn't super high, but when I thought through this question the German Civil Law concept came to mind quickly.
It's profitable reading, anyways — BGB I think can be informative around abstract thinking, logic, and order-of-operations. Maybe intellectually fruitful towards your question or maybe not, but interesting and recommended either way.
Good points.
I'll review and think more carefully later — out at dinner with a friend now — but my quick thought is that the proper venue, time, and place for expressing discontent with a cooperative community project is probably afterwards, possibly beforehand, and certainly not during... I don't believe in immunity from criticism, obviously, but I am against defection when one doesn't agree with a choice of norms.
That's the quick take, will review more closely later.
Hey - to preface - obviously I'm a great admirer of yours Kaj and I've been grateful to learn a lot from you, particularly in some of the exceptional research papers you've shared with me.
With that said, of course your emotions are your own but in terms of group ethics and standards, I'm very much in disagreement.
The upset feels similar to what I've previously experienced when something that's obviously a purely symbolic gesture is treated as a Big Important Thing That's Actually Making A Difference.
On the one hand, you're totally right. On the other hand, basically the entire world is made up of abstractions along these lines. What can the Supreme Court opinion in Marbury v. Madison be recognized as other than a purely symbolic gesture? Madison wasn't going to deliver the commissions, Justice Marshall (no relation) knew that for sure, and he made a largely symbolic gesture in how he navigated the thing. It had no practical importance for a long time, but it now forms one of the foundations of American jurisprudence, indirectly affecting billions of lives. Yet if you dig into the history, it really was largely symbolic at the time.
The world is built out of all sorts of abstract symbolism and intersubjective convention.
That by itself wouldn't trigger the reaction; the world is full of purely symbolic gestures that are claiming to make a difference, but they mostly haven't upset me in a long time. Some of the communication around Petrov Day has. I think it's because of a sense that this idea is being pushed on people-that-I-care-about as something important despite not actually being in accordance to their values, and that there's social pressure for people to be quiet about it and give in to the social pressure at a cost to their epistemics.
Canonical reply is this one:
https://www.lesswrong.com/s/pvim9PZJ6qHRTMqD3/p/7FzD7pNm9X68Gp5ZC
("Canonical" was intentionally chosen, incidentally.)
I feel like Oliver's comment is basically saying "people should have taken this seriously and people who treat this light-heartedly are in the wrong". It's spoken from a position of authority, and feels like it's shaming people whose main sin is that they aren't particularly persuaded by this ritual actually being significant, as no compelling reason for this ritual actually being significant has ever been presented.
https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism
From Well-Kept Gardens:
In any case the light didn't go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently. [...] I have seen rationalist communities die because they trusted their moderators too little.
Honestly, for anything that wasn't clearly egregiously wrong, I'd support the leadership team on here even if my feelings ran in a different direction. Like, leadership is hard. Really really really hard. If there was something I didn't believe in, I'd just quietly opt out.
Now, I fully understand I'm in the minority on this position — but I'm against 'every interpretation is valid' type thinking (why would every interpretation be valid as it relates to a group activity where your behavior affects the whole group?).
Likewise, pushing back against "shaming people whose main sin is that they aren't particularly persuaded by this ritual actually being significant" — isn't that actually both good and necessary if we want to be able to coordinate and actually solve problems?
There's a dozen or so Yudkowsky citations about this. Here's another:
https://www.lesswrong.com/posts/KsHmn6iJAEr9bACQW/bayesians-vs-barbarians
Let's say we have two groups of soldiers. In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy. In group 2, everyone at all levels knows all about tactics and strategy.
Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?
In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.
And finally,
Now it may be the case - a more agreeable part of me wants to interject - that this ritual actually is important, and that it should be treated as more than just a game.
But.
If so, I have never seen a particularly strong case being made for it.
I made that case last year extensively:
I even did, like, math and stuff. The "shut up and multiply" thing.
Long story short — I think shared trust and demonstrated cooperation are super valuable, good leadership is incredibly underappreciated, and whimsical defection is really bad.
Again though — all written respectfully, etc etc, and I know I'm in the minority position here in terms of many subjective personal values, especially harm/care and seriousness/fun.
Finally, it's undoubtedly true that my estimate of the potential utility of building out a base of successfully navigated low-stakes cooperative endeavors is multiple orders of magnitude higher than others'. I put the dollar-value of that as, actually, pretty high. Reasonable minds can differ on many of these points, but that's my logic.
Ah, I see, I read the original version partially wrong, my mistake. We're in agreement. Regards.
Hmm. Appreciate your reply. I think there's a subtle difference here, let me think about it some.
Hmm.
Okay.
Thrashing it out a bit more, I do think a lot of semi-artificial situations are predictive of future behavior.
Actually, to use an obviously extreme example that doesn't universally apply, that's more-or-less the theory behind the various Special Forces selection procedures —
As opposed to someone artificially creating a conflict to see how the other party navigates it — which I'm not at all a fan of — I think exercises in shared trust have both predictive value for future behavior and build good team cohesion when overcome.
I'd be interested to hear various participants' and observers' takes on the actual impact of this event
Me too, but I'd ideally want the data captured semi-anonymously. Most people, especially effective people, won't comment publicly "I think this is despicable and have incremented downwards various confidences in people as a result" whereas the "aww it's ok, no big deal" position is much more easily vocalized.
(Personally, I'm trying to tone down that type of vocalization myself. It's unproductive on an individual level — it makes people dislike you for minimal gain. But I speculate that the absence of that level of dialogue and expression of genuine sentiment potentially leads to evaporative cooling of people who believe in teamwork, mission, mutual trust, etc.)
Reasonable minds can differ on this and related points, of course. And I'm very aware my values diverge a bit from many here, again around stuff like seriousness/camaraderie/cohesion/intensity/harm-vs-care/self-expression/defection/etc.
Great comment. Insightful phrasing, examples, and takeaways. Thank you.
Two thoughts —
(1) Some sort of polling or surveying might be useful. In the Public Goods Game, researchers rigorously check whether participants understand the game and its consequences before including them in datasets. It's quite possible that there's incredibly divergent understandings of Petrov Day among the user population. Some sort of surveying would be useful to understand that, as well as things like people's sentiments towards unilateralist action, trust, etc no? It'd be self-reported data but it'd be better than nothing.
(2) I wonder how Petrov Day setup and engagement would change if the site went down for a month as a consequence.
Interesting thought yeah.
My first guess is there's some overlap but it's slightly orthogonal — btw, it might not have come across in the original post, but Butler is a really well-loved teammate who is happy to defer to other guys on his team, set them up for success, etc. He doesn't need to be "the guy" any given night — he just wants his team to win, with a rather extreme fervor about it.
I honestly don't get it - do you have a link to the previous discussion that justified why anyone's taking it all that seriously?
Here was my analysis last year —
In fairness, my values diverge pretty substantially from a lot of the community here, particularly around "life is serious" vs "life isn't very serious" and the value of abstract bonds/ties/loyalties/camaraderie.
You're being very kind in far-mode consequentialism here, but come on now.
Making your friend look foolish in front of thousands of people is bad etiquette in most social circles.
Why would there be?
Different social norms, I suppose.
I'm trying to think if we ever prank each other or socially engineer each other in my social circle, and the answer is yes but it's always by doing something really cool — like, an ambiguous package shows up but there's a thoughtful gift inside.
(Not necessarily expensive — a friend found a textbook on Soviet accounting for me, I got him a hardcover copy of Junichi Saga's Memories of Silk and Straw. Getting each other nice tea, coffee, soap, sometimes putting it in a funny box so it doesn't look like what it is. Stuff like that. Sometimes nicer stuff, but it's not about the money.)
Then I'm trying to think how my circle in general would respond to no-permission-given, out-of-scope pranking of someone's real-life community that they're a member of — and yeah, there'd be pretty severe consequences in my social circle if someone did that. If I heard that a current friend or acquaintance did what your buddy did, they'd be marked as someone incredibly discourteous and much less trustworthy. It would just get marked as... pointless rude destructive behavior.
And it's pretty tech heavy btw, we do joke around a lot, it's just when we do pranks it's almost always at the end a gift or something uplifting.
I don't mean this to be blunt btw, I just re-read it before posting and it reads more blunt than I meant it to — I was just running through whether this would happen in my social circle, I ran it out mentally, and this is what I came up with.
Obviously, everyone's different. And that's of course one of the reasons it's hard for people to get along. Some sort of meta-lesson, I suppose.
Umm. Grudgingly upvoted.
(For real though, respect for taking the time to write an after-action report of your thinking.)
I was tricked by one of my friends:
Serious question - will there be any consequences for your friendship, you think?
It'd take a few paragraphs to tell the whole story if you don't already follow basketball, but this —
Long story really short, the 76ers have a player who is an incredible athlete but doesn't feel comfortable taking jump shots far away from the basketball hoop.
Thus, defenses can ignore him when he's out on the perimeter.
His coach told him publicly to take one 3-point shot per game. Coach said he doesn't even care if he hits it or not.
The player basically refused to do it.
It's more detailed than that, but the 80/20 is a young incredible athlete with immense potential on the team refused to follow his coach's (incredibly reasonable) instruction.
In most sports and at most levels of play in sports, that'd get you benched by the coach.
But in the NBA, when a coach and star player feud, the coach gets fired around 9 times out of 10. (The other time, the star player gets traded. But the coach usually gets fired first in the NBA.)
So, I think it's important that LessWrong admins do not get to unilaterally decide that You Are Now Playing a Game With Your Reputation.
Dude, we're all always playing games with our reputations. That's, like, what reputation is.
And good for Habryka for saying he feels disappointment at the lack of thoughtfulness and reflection; it's very much not just permitted but almost mandated by the founder of this place —
https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism
https://www.lesswrong.com/posts/RcZCwxFiZzE6X7nsv/what-do-we-mean-by-rationality-1
Here's the relevant citation from Well-Kept Gardens:
I confess, for a while I didn't even understand why communities had such trouble defending themselves—I thought it was pure naivete. It didn't occur to me that it was an egalitarian instinct to prevent chieftains from getting too much power.
This too:
I have seen rationalist communities die because they trusted their moderators too little.
Let's give Habryka a little more respect, eh? Disappointment is a perfectly valid thing to be experiencing and he's certainly conveying it quite mildly and graciously. Admins here did a hell of a job resurrecting this place back from the dead, to express very mild disapproval at a lack of thoughtfulness during a community event is....... well that seems very much on-mission, at least according to Yudkowsky.
Y'know, there was a post I thought about writing up, but then I was going to not bother to write it up, but I saw your comment here H and "high level of disappointment reading this response"... and so I wrote it up.
Here you go:
https://www.lesswrong.com/posts/scL68JtnSr3iakuc6/win-first-vs-chill-first
That's an extreme-ish example, but I think the general principle holds to some extent in many places.
Yeah, I have first-pass intuitions but I genuinely don't know.
In an era with both more trustworthy scholarship (replication crisis, etc) and less polarization, I think this would actually be an amazing topic for a variety of longitudinal studies.
Alas, probably not possible right now.
Respectfully — and I do mean this respectfully — I think you're talking completely past Jacob and missed his point.
You comment starts:
How much your life is determined by your actions, and how much by forces beyond your control, that is an empirical question. You seem to believe it's mostly your actions.
But Jacob didn't say that.
You're inferring something he didn't say — actually, you're inferring something that he explicitly disclaimed against.
Here's the opening of his piece right after the preface; it's more-or-less his thesis:
What’s bad about victim mentality? Most obviously, inhabiting a narrative where the world has committed a great injustice against which you are helpless is extremely distressing. *Whether the narrative is justified or not, it causes suffering.*
(Emphasis added.)
You made some other interesting points, but I don't think he was trying to ascribe macro-causality to internal or external factors.
He was saying, simply, in 2020-USA he thinks you'll get both (1) better practical outcomes and (2) better wellbeing if you eschew what he calls victim mentality.
He says it doesn't apply universally (eg, Ancient Sparta).
And he might be right or he might be mistaken.
But that's broadly what his point was.
You're inferring something, for whatever reason, that isn't what he said and that he actually pretty much said he didn't believe, and then you went from there.
Going through these now. I started with #3. It's astoundingly interesting. Thank you.
Hmm. I'm having a hard time writing this clearly, but I wonder if you could get interesting results by:
- Training on a wide range of notably excellent papers from "narrow-scoped" domains,
- Training on a wide range of papers that explore "we found this worked in X field, and we're now seeing if it also works in Y field" syntheses,
- Then giving GPT-N prompts to synthesize narrow-scoped domains in which that hasn't been done yet.
You'd get some nonsense, I imagine, but it would probably at least spit out plausible hypotheses for actual testing, eh?
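For what it's worth, here's a rough sketch of what the third step's prompt might look like (Python; the template wording and the example domains are made up for illustration, nothing here is a real API or a tested prompt):

```python
# Hypothetical prompt template for the synthesis step. The wording and
# the example domains are invented for illustration only.
SYNTHESIS_PROMPT = (
    "In {source_domain}, {technique} has been shown to {result}. "
    "Propose how this technique might be applied in {target_domain}, "
    "and describe an experiment that would test whether it transfers."
)

print(SYNTHESIS_PROMPT.format(
    source_domain="metallurgy",
    technique="slow annealing schedules",
    result="reduce internal stresses in a material",
    target_domain="neural-network optimization",
))
```

Amusingly, that particular pairing already paid off once: simulated annealing in optimization was lifted straight from annealing in metallurgy.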
By the way, wanted to say this caught my attention and I did this successfully recently on this question —
Combined probabilities were over 110%, so I went "No" on all candidates. Even with PredictIt's 10% fee on winning, I was guaranteed to make a tiny bit on any outcome. If a candidate not on the list was chosen, I'd have made more.
My market investment came out to ($0.43) — that's negative 43 cents; ie, no capital required to stay in it — on 65 no shares across the major candidates. (I'd have done more, but I don't understand how the PredictIt $850 limit works yet and I didn't want to wind up not being able to take all positions.)
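In case the mechanics aren't obvious, here's a toy sketch of the position (Python; the prices and share counts are invented, not the actual market, and I'm assuming the 10% fee comes out of the profit on each winning No position):

```python
# Toy "No on every candidate" position. Prices are invented for
# illustration. When the implied Yes probabilities sum to well over
# 100% (here 115%), buying No across the board nets a profit no
# matter who wins, even after the 10% fee on winnings.
no_prices = [0.55, 0.70, 0.82, 0.87, 0.91]  # cost per No share, per candidate
shares = [13] * 5                            # 65 No shares, split evenly
cost = sum(s * p for s, p in zip(shares, no_prices))

for winner in range(len(no_prices)):
    payout = 0.0
    for i, (s, p) in enumerate(zip(shares, no_prices)):
        if i != winner:                          # every other No pays $1/share
            payout += s * p + 0.9 * s * (1 - p)  # stake back + 90% of profit
    print(f"if candidate {winner} wins: net {payout - cost:+.2f}")
```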
I need to figure out how the $850 limit works in practice soon — is it 850 shares, $850 at risk, $850 max payout, or.....? Kinda unclear from their documentation, will do some research.
But yeah, it was fun and it works. Thanks for pointing this out.
This is an interesting post — you're covering a lot of ground in a wide-ranging fashion. I think it's a virtual certainty that you'll come with some interesting and very useful points, but a quick word of caution — I think this is an area where "mostly correct" theory can be a little dangerous.
Specifically:
>If you earn 4% per year, then you need the aforementioned $2.25 million for the $90,000 half-happiness income. If you earn 10% per year, you only need $900,000. If you earn 15% per year, you only need $600,000. At 18% you need $500,000; at 24% you need $375,000. And of course, you can acquire that nest egg a lot faster if you're earning a good return on your smaller investments. [...] I'm oversimplifying a bit here. While I do think 24% returns (or more!) are achievable, they would be volatile.
You're half correct here, but you might be making a subtle mistake — specifically, you might be using ensemble probability in a non-ergodic space.
Recommended reading (all of these can be Googled): safe withdrawal rate, expected value, variance, ergodicity, ensemble probability, Kelly criterion.
Specifically, naive expected value (EV) in investing tends to implicitly assume ergodicity; financial returns are non-ergodic; it's very possible to wind up broke with near certainty even with high expected returns, if the amount of capital you deploy is too low for the strategy you're operating.
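Here's a minimal simulation of that point, under an assumed toy bet, since the numbers make it vivid:

```python
import random

# Toy bet: each round, wealth is multiplied by 1.5 or 0.6 with equal
# probability, fully reinvested. Per-round EV is +5%
# (0.5*1.5 + 0.5*0.6 = 1.05), but the time-average growth rate is
# negative: 0.5*ln(1.5) + 0.5*ln(0.6) is about -0.053 per round.
random.seed(0)
n_paths, n_rounds = 10_000, 50
finals = []
for _ in range(n_paths):
    wealth = 1.0
    for _ in range(n_rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    finals.append(wealth)

mean = sum(finals) / n_paths           # ensemble average: looks healthy
median = sorted(finals)[n_paths // 2]  # the typical single path: near ruin
print(f"ensemble mean: {mean:.2f}, median path: {median:.4f}")
```

The ensemble average grows (that's the naive EV story) while the median path decays toward zero; that divergence is what non-ergodicity means here, and sizing positions via the Kelly criterion is the standard fix.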
Yes, there's valid counter-counterarguments here but you didn't make any of them! The words/phrases safety, margin of safety, bankroll, ergodicity, etc etc didn't show up.
The best counterargument is probably low-capital-required arbitrage such as what Zvi described here; indeed, I followed his line of thinking and personally recently got pure arbitrage on this question — just for the hell of it, on nominal money. It's, like, a hobby thing. [Edit: btw, thanks Zvi.] This is more-or-less only possible because some odd rules they've adopted for regulatory reasons and for UI/UX simplicity that result in some odd behavior.
Anyway, I digress; I like the general area of exploration you're embarking on a lot, but "almost correct" in finance is super dangerous and I wanted to flag one instance of that. Consistent high returns on a small amount of capital do not seem like a good strategy to me; further, if you can get 24%+ a year on any substantial volume, you should probably just stack up some millions for a few years, after which you could rely on passive returns without the intense discipline needed to keep generating those returns (even setting aside ergodicity/bankroll issues).
Lynch's One Up on Wall Street is an excellent take by someone who actually managed to make those type of returns for multiple decades; it's not exactly something you do casually...
(Disclaimer: certainly not an expert, potentially some mistakes here, not comprehensive, etc etc etc.)
Hi all,
I'm going to withdraw my talk for today — after doing some prep yesterday with Jacob and clarifying everyone's skill level and background, I put a few hours in and couldn't get to the point where I thought my talk would be great.
The quality level has been so uniformly high, I'd rather just leave more time for people to discuss and socialize than to lower the bar.
Apologies for any inconvenience, gratitude, and godspeed.
Incredibly thought-provoking.
Thank you.
Reading this made me think about my own communication styles.
Hmm.
After some quick reflection, among people I know well I think I actually oscillate between two — on the one hand, something very close to Ray Dalio's Bridgewater norms (think "radical honesty but with more technocracy, ++logos/--pathos").
On the other hand, a near-polar opposite in Ishin-denshin — a word that's so difficult to translate from Japanese that one of the standard "close enough" definitions for it is..... "telepathy."
No joke.
Almost impossible to explain briefly; heck, I'm not sure it could be explained in 7,000 words if you hadn't immersed yourself in it substantially and then studied Japanese history and culture on top of the immersion.
But it's really cool when it works.
Hmm... I've never really reasoned through how and why I utilize those two styles — which are so very different on the surface — but my quick guess is that they're both really, really efficient when running correctly.
Downside — while both are easy and comfortable to maintain once built, they're expensive and sometimes perilous to build.
Some good insights in here for further refinement and thinking — grateful for this post, I'll give this a couple hours of thought at my favorite little coffee bar next weekend or something.
> Very good post, highly educational, exactly what I love to see on LessWrong.
Likewise — I don't have anything substantial to add except that I'm grateful to the author. Very insightful.
Interesting metaphor. Enjoyed it.
The quality I'm describing isn't quite "readability" — it overlaps, but that's not quite it.
Feynman has it —
http://www.faculty.umassd.edu/j.wang/feynman.pdf
It's hard to nail down; it'd probably be a very long essay to even try.
And it's not a perfect predictor, alas — just evidence.
But I believe there's a certain way to spot "good reasoning" and "having thoroughly worked out the problem" from one's writing. It's not the smoothness of the words, nor the simplicity.
It's hard to describe, but it seems somewhat consistently recognizable. Yudkowsky has it, incidentally.
I like to start by trying to find one author who has excellent thinking and see what they cite — this works for papers and books with bibliographies, and increasingly for other forms of media as well.
For instance, Dan Carlin of the (exceptional and highly recommended) Hardcore History podcast cites all the sources he uses when he does a deep investigation of a historical era, which is a good jumping-off point if you want to go deep.
The hard part is finding that first excellent thinker, especially in a field where you can't yet differentiate quality. But there are some general conventions of how smart thinkers tend to write and reason that you can learn to spot. There's a certain amount of empathy, clarity, and — for lack of a better word — "good aesthetics" that, if present, suggest the author is smart and trustworthy.
The opposite isn't necessarily the case — there are good thinkers who don't follow those practices and are hard to follow (say, Laozi or Wittgenstein maybe) — but when those factors are present, I tend to weight the thinking well.
Even if you have no technical background at all, this piece by Paul Graham looks credible (emphasis added) —
https://sep.yimg.com/ty/cdn/paulgraham/acl1.txt?t=1593689476&
"What does addn look like in C? You just can't write it.
You might be wondering, when does one ever want to do things like this? Programming languages teach you not to want what they cannot provide. You have to think in a language to write programs in it, and it's hard to want something you can't describe. When I first started writing programs-- in Basic-- I didn't miss recursion, because I didn't know there was such a thing. I thought in Basic. I could only conceive of iterative algorithms, so why should I miss recursion?
If you don't miss lexical closures (which is what's being made in the preceding example), take it on faith, for the time being, that Lisp programmers use them all the time. It would be hard to find a Common Lisp program of any length that did not take advantage of closures. By page 112 you will be using them yourself."
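(An aside for anyone who hasn't met closures: the addn Graham mentions is a function that builds and returns another function. A rough Python equivalent, mine rather than his, looks like this:

```python
def addn(n):
    # Returns a new function that "closes over" n and remembers it.
    def add(x):
        return x + n
    return add

add2 = addn(2)
print(add2(10))  # 12
```

This is the thing his essay says you just can't write in C.)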
When I spot that level of empathy/clarity/aesthetics, I think, "Ok, this person likely knows what they're talking about."
So, me, I start by looking for someone like Paul Graham or Ray Dalio or Dan Carlin, and then I look at who they cite and reference when I want to go deeper.
Hi Agnes, I just wanted to say — much respect and regards for logging on to discuss and debate your views.
Regardless of whether we agree (personally, I'm in partial agreement with you), if more people would create accounts and engage thoughtfully in different spaces after sharing a viewpoint, the world would be a much better place.
Salutations and welcome.
I think you'd probably like the work of John Boyd:
https://en.wikipedia.org/wiki/John_Boyd_(military_strategist)
He's really interesting in that he worked on a mix of problems and areas with many different levels of complexity and rigor.
Notably, while he's usually talked about in terms of military strategy, he did some excellent work in physics that's fundamentally sound and still used in civilian and military aviation today:
https://en.wikipedia.org/wiki/Energy%E2%80%93maneuverability_theory
He was a skilled fighter pilot, so he was able to both learn theory and convert it into tactile performance.
Then, later, he explored challenges in organizational structures, bureaucracy, decision making, corruption, consensus, creativity, inventing, things like that.
There's a good biography on him called "Boyd: The Fighter Pilot Who Changed the Art of War" - and then there's a variety of briefings, papers, and presentations he made floating around online. I went through a phase of studying them all; there's some gems there.
Notably, his "OODA" loop is often incorrectly summarized as a linear process, but he defined it as a set of interacting loops: Observe feeds Orient, Orient shapes both Decide and Act, and feedback plus "implicit guidance and control" paths run from Orient back to Observe and Act. (His original briefing diagram is worth looking up.)
I think the most interesting part of it is under-discussed — the "Implicit Guidance and Control" aspect, where people can get into cycles of Observe/Act/Observe/Act rapidly without needing to intentionally orient themselves or formally make a decision.
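A loose illustration of that distinction (my sketch, not Boyd's formalism): the deliberate loop runs all four stages, while implicit guidance and control lets a well-trained orientation act as a reflex, cycling observe/act directly:

```python
import random

def observe():
    return random.random()            # stand-in for reading the environment

def orient(observation, model):
    model["estimate"] = observation   # fold the observation into a worldview
    return model

def decide(model):
    return "evade" if model["estimate"] > 0.5 else "hold"

def act(action):
    print("executing:", action)

# The full deliberate loop: Observe -> Orient -> Decide -> Act.
model = {"estimate": 0.0}
for _ in range(3):
    model = orient(observe(), model)
    act(decide(model))

# Implicit guidance and control: orientation, once deeply trained,
# compiles down to a reflex and the agent cycles Observe -> Act directly.
reflex = lambda obs: "evade" if obs > 0.5 else "hold"
for _ in range(3):
    act(reflex(observe()))
```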
Since he comes at these problems from a different mix of backgrounds (in some areas he could bring formal mathematics to bear, in others he couldn't), he provides a lot of insights. Some of his takeaways seem spot-on, but more interesting are the ways he can prime thinking on topics like these. I think you and he were probably interested in some similar veins of thought, so it might produce useful insights to dive in a bit.