Careless thinking: A theory of bad thinking
post by Nathan Young · 2024-12-17T18:23:16.140Z

This is a link post for https://nathanpmyoung.substack.com/p/be-more-katja?r=18mntl&utm_campaign=post&utm_medium=web
Have you ever noticed how differently we approach buying a car versus choosing what to watch on Netflix? One might involve spreadsheets, research, and asking friends for advice. The other? We might just click on whatever catches our eye. I argue a lot of our thinking is more like the second than the first. This argument was originally made to me by Katja Grace, but if you don’t like it it’s probably my fault.
When people take something seriously – like their weight, their children's education, or a major purchase – they may become amateur researchers. They may dig through studies, compare options, and carefully weigh advice from trusted sources. But when it comes to leisure activities? Personally, I’m much more likely to go with the flow. Pick something up and see how it feels. I could do a cost-benefit analysis of an hour of television, but instead I’ll start watching and see how it feels.
When you do your most important thinking do you take it seriously, or not?
Tl;dr:
- This post sets out a theory, not a fait accompli
- Most of our thinking, even on important topics, resembles casual Netflix browsing more than careful car-buying research.
- We often default to quick, inattentive thinking rather than careful deliberation.
- This pattern is broad, from Twitter discussions to Democratic Party decision making
- I look at the evidence for and against
- I give a number of theories for this, e.g. Kahneman’s idea that deliberate thinking is taxing, or Caplan's theory that we invest mental energy only when we actually think we will benefit
- I give a number of suggestions, e.g. spending less time thinking but doing so more deliberately, or using LLMs for feedback
Theory:
Much thinking is inattentive and error-prone, without updating, including on very important topics.
Longform articles, reports, government decision making, Twitter interactions. My theory is that in all these places most thinking is more like “getting it done” than careful, sober-minded decision making.
Evidence:
What evidence is there of this?
Consider what Twitter (X) is like. People are always making quick, bad arguments. I was surprised how few people seemed to know that Trump had set up fake slates of electors (even among Kamala supporters). There is a lack of seriousness to a lot of discussion there, including my own. What % of people’s time is spent in discussions where they might genuinely change their mind? 5%? 1%?
Next, consider Philip Tetlock’s work on forecasting. His team of superforecasters managed to beat a team of intelligence experts who had secret information. Part of that was the skill of his forecasters, but he also attributes their success to their attitude of constantly wanting to get better (what he calls “perpetual beta”, I think). In some sense, at least, the forecasters were striving harder for the right answer than the well-paid intelligence experts with secret info.
Even some quite serious spaces seem to be missing serious thinking on lots of topics. I am surprised by the lack of deep tech or parenting discussion on LessWrong. And many topics on the EA forum seem under-discussed - individual profiles of other philanthropies, personal estimates of charities, animal welfare estimates, etc.
What about evidence against the theory?
Some people seem capable of lots of careful thinking, even in throwaway moments. On Twitter, it’s people like Stefan Schubert, Joshua Blake or Katja Grace, but I have met tens of people like this.
This paper suggests to me that there is far more focus on preferences than I might imagine. It asks people to score, on a 7-point scale, questions like “It is important that the [art/charity/investments] I choose reflects my personal tastes or values.” Across 100 university respondents, the answers are less objective than I would expect, particularly for charity, but also for investments and medical treatments. This suggests that in other decisions where thinking seems poor, people may instead be attempting to satisfy alternative preferences that I cannot see.
Having thought about the above more, I think “accuracy isn’t a top priority” is a better theory than the one expressed here, but if I don’t publish this now it will probably be months.
But mostly I am thinking about my own experience.
On Twitter, I fire off quick replies to tweets where I know the "strong" counterpoint. I have passing conversations where I dismiss potentially valid evidence because I can't process it properly. I take positions without really considering the full picture. In a footnote, I describe these behaviours with more introspection[1].
There are rare times when I actually think carefully. Sometimes my friend @CharlesD, a well-calibrated forecaster, will challenge my view: I know he’s often right when we disagree, so I'll examine which of my considerations might be wrong[2]. Or if I'm with @KatjaGrace[3], who is practically unable to take sides, I feel very self-conscious when I’m just trying to hold a position.
Having noticed this in myself, I see this kind of behaviour in others. On Twitter I don’t expect most people to change their minds upon engaging. I do not expect many experts and communities to have well-calibrated answers to the questions they claim as a top priority.
In short, the world makes more sense if many people are engaging in these behaviours.
If the above is true, what is going on?
First some mechanistic theories:
Daniel Kahneman’s Thinking, Fast and Slow suggests deliberate thinking is tiring and our brains avoid it where possible. It’s much easier to do heuristic-based thinking (which he calls “System 1”).
Bryan Caplan offers another perspective in "The Myth of the Rational Voter." He suggests we approach political thinking through a lens of personal consequence and personal convenience. When we truly believe that our vote matters, we think carefully about it. But for much thinking – from economic policy to company norms – most people correctly believe their vote won't change anything. So despite the topics' importance, we don't invest the mental energy to think them through properly.
A related signalling approach might be put forward by Robin Hanson. The imaginary Hanson in my brain says something like “People are doing serious thinking, but on topics of what matters for them, and for most people that’s status, not the topic at hand”. Under this view, people do care deeply, but they care about staying in their tribe, pleasing their boss, not rocking the boat. And on aggregate this leads to a lot of bad thinking.
Next, I find the notion of “aliveness” useful[4]. I somehow find it much more alive/tasty/desirable to have a quick spar with someone than to slowly figure stuff out. To compare the strength of our quips. I am a creature searching for sex, calories and rock and roll and this is no different. Somehow I find simple thinking much more attractive than being careful.
Finally there is some mix of the principal-agent problem and how we aggregate preferences. Most of us are trying to appease many people who aren’t closely connected to the work we do. It is both hard to know what they would want and hard to actually ensure that our incentives are towards doing that thing. Politicians might want to do a good job, but it’s hard to know what that looks like and their incentives are often towards not upsetting their voters.
And next some theories on why I personally don’t try harder:
Most prominently, I think I identify as someone who “does thinking”. And that accounts for a lot of bad behaviour. I’m a thoughtful person; how could I be engaged in performative or low-quality thinking? If there is some reason that I might not be thinking carefully, it butts up against this identity, and that alone gives it reason to be ignored.
Next there is busyness. I often have a long to-do list, but I enjoy a political back and forth. Much thinking gets relegated to a sort of “smart-casual” mode: I pretend it’s work but take an attitude of distraction. But in turn this means that I don’t really focus on the matter at hand. And if I might change my mind and don’t want to... suddenly that’s a good time to get back to work.
Third is status. I have a pretty large Twitter account at this point. I get lots of responses to my tweets. And so whenever I want to not have to respond to one, I can easily say “Oh I can’t respond to everyone!”[5].
All three of these are stories I tell myself that allow me to justify thinking as leisure. Whenever I might change my mind, I can instead tell one of these stories and not have to. I’m a thoughtful person! I’m too busy! I can’t respond to everyone!
Times when I manage serious thinking
First, when my life or energy depends on it. If I really want a job or to impress someone, suddenly the good thinking becomes easier. I am sure that if I had a child who was sick with an unknown disease I would take it very seriously indeed.
Beyond that, I think a key insight here is that I have to make space for non-leisure thinking. If I want to think about something carefully, I might have to put time aside for it. I’ve heard Benjamin Todd books time in a hotel to write, locking himself in a room away from distractions. That’s a far cry from Twitter.
I can also improve my incentives. When I am forecasting, that’s sort of political. But I know that I want a good score (or to make money on bets, etc). Somehow I have really learned that I need to take time, write out considerations, etc. Wobbly thinking doesn’t cut it here. I guess it’s pretty true in blog writing too, which is why I’m trying to blog more and tweet less.
Assuming it’s true, how can we avoid inattentive thinking on important topics?
Ask an LLM. Ask “what are the basic steps for doing X”, and submit your draft for its feedback. LLMs are like a wise, careful friend who wants to spend a lot of time on your work. They will read a whole draft in seconds and find errors and inconsistencies, all for the cost of a $20-a-month subscription. I recommend it[6].
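As a rough sketch of what this can look like in practice (assuming the OpenAI Python SDK and an illustrative model name; any chat-capable LLM and prompt wording would do):

```python
# A minimal sketch of submitting a draft to an LLM for feedback.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in your environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def review_draft(draft: str) -> str:
    """Ask the model to act as a careful, critical editor on a draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whichever model you actually use
        messages=[
            {"role": "system",
             "content": "You are a careful, critical editor."},
            {"role": "user",
             "content": "List factual errors, inconsistencies, and the "
                        "weakest arguments in this draft, in order of "
                        "importance:\n\n" + draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("draft.md") as f:  # hypothetical filename
        print(review_draft(f.read()))
```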
Notice inattention. I am a pretty inattentive person. A woman I dated used to hate when I was on my phone when we hung out. But I justified it as part of my ADHD. Well, I was wrong. Even if it wasn’t my fault[7], I could do something to fix it. Similarly, I did not create this complex world, but I still have to interact with it as it is, and that requires focus and care. I have started to notice when I want to be inattentive. Often I am anxious and want something to smear my consciousness across so as to avoid noticing the pain. I find noticing valuable.
Do less, better thinking. For me, a big issue is that my thinking is smeared across so many topics, for minutes or even seconds each. And then I get tired and snippy. I’ve been trying to write more longform, but that necessarily means I cover fewer issues. But that’s okay; most of what Twitter cares about from week to week doesn’t matter.
Create space for the most important thinking. If I really care about thinking about something, I might take time off and create a calm, low-stress environment. What is the most important thing in your life? Have you taken a day off work to consider how to better orient to that thing? If not, why not?
Write something for friends. I might write a blog post (like this one!) and share it with friends to try and get their view on it. I want to respect my friends’ time, so I’ll try to make sure that the document is carefully written.
Notice when I am not doing these things. These days I try and use these as triggers. If I can’t be bothered to get some time alone or write a discussion up, do I actually want to think about it? Or do I just want an argument?
Where do I see this in practice
I criticise these organisations as my friends[8].
LessWrong and rationalists more generally seem to lack focus on building a track record. It has been clear to them for decades that with forecasting and prediction markets one can show who tends to be good at thinking and who isn’t. Why isn’t there a publicly accessible track record for top rationalists? Is Eliezer Yudkowsky good at predicting things? He’s made a few good trades on Manifold, but doesn’t the rationalist diaspora deserve a better system than this? What about Hanson, or Zvi? I expect Habryka to argue that the LessWrong review is more accurate than a forecasting track record, but I disagree.
More broadly, rationalists seem to me not focused enough on geopolitics and investing. Are we good at making global plays around AI? I don’t know, but I would be much more confident if I knew how good we were at predicting other important shifting global problems. Where is the rationalist view on Taiwan?[9] I think a fair criticism here is “Nathan, be the change you want to see”, but still, why aren’t we here already?[10]
Likewise, in EA I am surprised how hard it is to find numbers for many things, let alone medians of several respected people on said topics. Is there an easy way to see animal welfare ranges, cost-effectiveness numbers, comparable cause prioritisations, website views? Not that I’ve seen. It is to Open Philanthropy’s credit that most of their funding is easily searchable and that websites like OpenBook can exist. But this seems to be the exception, rather than the rule.
I found the EA posture towards SBF pretty unserious too. I figured that someone was watching the billionaires more closely than the questions I was putting on Manifold. But it seemed that no one was coordinating. We see how that turned out. More generally, while rationalism might focus too much on new mechanisms and hence achieve too little, sometimes it seems EA wants to change the world without changing itself.
I don’t read Progress Studies closely enough to know, but it feels a bit like they haven’t really thought about AI risk, perhaps because they don’t want that vibe or because they feel EA has it covered. This feels pretty unserious, notably because progress on benefits vs costs can be asymmetric (and we should want it to be).
I was heartened to see the Democrats dump Biden, who was probably going to lose. But they took far too long to do so and replaced him with Harris, who wasn’t much better. Somehow the leadership race for the most powerful country on earth became a self-congratulatory mass deception. I would have voted for Harris, but I do not feel bad that Democrats were punished for this.
In conclusion:
If I say that there is a kind of thinking I do when I watch television, you hopefully know what I mean. Unfocused, unserious, unimportant. The question is whether much of my thinking looks like that or like something more deliberate. And if the former, perhaps I should change.
- ^
Often when reading Twitter I sort of open many browser tabs in my brain. I think it would be better if I sat and read one, but I just scroll, smearing my focus over many things. I am already in a sort of semi-focused state.
And so to jerk me out of that into writing replies, which I find tiresome, something has to feel particularly good, like using an argument I think is clever, correcting someone with a strong reply, getting into a discussion about something I think is important.
But I rarely have the energy to actually follow up with new research or actual thinking. If I did, I probably would have read something more thoroughly earlier.
- ^
Sorry.
- ^
I write more about my feelings about Katja here: Be More Katja
- ^
I don't know where this is from, but it comes up in my sort of meditation and spiritual circles. "What is alive to you?" is a surprisingly good prompt.
- ^
This is kind of snotty and I don't endorse it, but I think I probably do think it. If you recoil from thoughts like this, I recommend asking if you too have them (though maybe you are just a kinder person than me!)
- ^
Though they are not friends. They are black boxes. I would not come to rely on them for emotional regulation. Future models may have their own goals that are different to mine.
- ^
It was probably my fault.
- ^
It is the same standard I hold myself to. You may give me feedback here.
- ^
This page is kind of an overview, but it was written by me: https://www.lesswrong.com/tag/china
- ^
I have some weak theory that the problem here is that Rationalists like transparency more than they like accuracy. Often I would prefer the cryptic pronouncement of someone at the top of Manifold’s leaderboard to a 10,000-word piece about some LessWronger's introspection.
17 comments
comment by Gunnar_Zarncke · 2024-12-17T21:05:40.936Z
Much of marketing and sales is intended to make us think fast and go by intuition. The former by using deadlines and the latter by appealing to emotions and how we'd feel about the decision. Or by avoiding the decision altogether, e.g., by making us think past the sale or creating situations where each outcome is a win.
comment by Logan Riggs (elriggs) · 2024-12-18T14:58:30.326Z
I also have a couple of friends that require serious thinking (or being on my toes). I think it's because they have some model of how something works, and I say something, showing my lack of this model.
Additionally, programming causes this as well (in response to compilation errors, nonsense outputs, or runs that take too long).
↑ comment by Nathan Young · 2024-12-19T10:17:30.025Z
Yes, this is one reason I really like forecasting. It forces me to see if my thinking was bad and learn what good thinking looks like.
comment by Ninety-Three · 2024-12-18T13:00:38.414Z
Having thought about the above more, I think “accuracy isn’t a top priority” is a better theory than the one expressed here, but if I don’t publish this now it will probably be months.
I like how this admission supports the "accuracy isn't a top priority" theory.
↑ comment by Nathan Young · 2024-12-19T10:13:45.500Z
Do you mean this as a rebuke?
I feel a little defensive here, because I think the acknowledgement and subsequent actions were more accurate and information-preserving than any others I can think of. I didn't want to rewrite it, I didn't want to quickly hack useful chunks out, I didn't want to pretend I thought things I didn't; I actually did hold these views once.
If you have suggestions for a better course of action, I'm open.
↑ comment by Ninety-Three · 2024-12-19T12:19:34.076Z
I mean this as agreement with the "accuracy isn’t a top priority" theory, plus an amused comment about how the aside embodies that theory by acknowledging the existence of a more accurate theory which does not get prioritized.
↑ comment by Nathan Young · 2024-12-19T12:30:16.862Z
Sure, but again, to discuss what really happened: it wasn't that it wasn't prioritised, it was that I didn't realise it until late into the process.
That isn't prioritisation, in my view; that's half-assing. And I endorse having done so.
comment by ChristianKl · 2024-12-19T10:10:23.032Z
I don't think that the Democrats' decision making about Biden is in the same class as individual decision making. People around Biden had a lot of political power, and people who challenged that power could lose their careers for not being a team player.
It needed the poor debate performance for powerful people to feel like they could get away with calling for Biden to step down without paying a huge price. It's no sign that people weren't thinking.
↑ comment by Nathan Young · 2024-12-19T10:16:50.967Z
I think it caused them to have much less time to choose a candidate, and so they chose a worse candidate than they could have.
If thinking is the process of coming to conclusions you reflectively endorse, I think they did bad thinking and that in time people will move to that view.
Thinking is about choosing the action that actually wins, not the one that is justifiable by social reality, right?
↑ comment by ChristianKl · 2024-12-19T10:27:35.984Z
Dean Phillips didn't win. I think Cenk Uygur got defunded.
If somebody does not pick a fight that's costly to them, that's no sign of careless thinking.
↑ comment by Nathan Young · 2024-12-19T10:33:12.159Z
I mean the Democratic party insiders who resisted the idea that Biden was unsuitable for so long and counselled him to stay when he was pressed. I think those people were thinking badly.
Or perhaps I think they were thinking more about their own careers than the next administration being Democrat.
↑ comment by ChristianKl · 2024-12-19T11:40:02.582Z
What evidence do you have for the claim that major Democratic party insiders counseled him to stay?
↑ comment by Ninety-Three · 2024-12-19T12:26:45.170Z
If there was a unified actor called The Democrats that chose Biden, it chose poorly sure. But it seems very plausible that there were a bunch of low-level strategists who rationally thought "Man, Biden really shouldn't run but I'll get in trouble if I say that and I prefer having a job to having a Democratic president" plus a group of incentive-setters who rationally thought they would personally benefit more from creating the conditions for that behaviour than from creating conditions that would select the best candidate.
It's not obvious to me that this is a thinking carefully problem and not a principal-agent problem.
↑ comment by Nathan Young · 2024-12-19T12:28:59.636Z
Or a coordination problem.
I think coordination problems are formed from many bad thinkers working together.
comment by trevor (TrevorWiesinger) · 2024-12-19T00:59:18.366Z
If you converse directly with LLMs (e.g. instead of through a proxy or some very clever tactic I haven't thought of yet), which I don't recommend, especially not describing how your thought process works, one thing to do is regularly ask it "what does my IQ seem like based on this conversation? I already know this is something you can do. must include number or numbers".
Humans are much smarter and better at tracking results instead of appearances, but feedback from results is pretty delayed, and LLMs have quite a bit of info about intelligence to draw from. Rather than just IQ, copy-pasting stuff like paragraphs describing concepts like thinkoomph is great too, but this post seems more like something you wouldn't want to exclude from that standard prompt.
One thing that might be helpful is the neurology of executive functioning. Activity in any part of the brain suppresses activity elsewhere; on top of reinforcement, this implies that state and states are one of the core mechanisms for understanding self-improvement and getting better output.
comment by t14n (tommy-nguyen-1) · 2024-12-17T20:46:12.016Z
re: public track records
I have a fairly non-assertive, non-confrontational personality, which causes me to defer to "safer" strategies (e.g. nod and smile, don't think too hard about what's being said, or at least don't vocalize counterpoints). Perhaps others here might relate. These personality traits are reflected in "lazy thinking" online -- e.g. not posting even when I feel like I'm right about X, not sharing an article or sending a message for fear of looking awkward/revealing a preference about myself that others might not agree with.
I notice that people who are very assertive and/or competitive, who see online discussions as "worth winning", will be much more publicly vocal about their arguments and thought process. Meek people (like me), may not see the worth in undertaking the risk of publicly revealing arguments or preferences. Embarrassment, shame, potentially being shunned for your revealed preferences, and so on -- there are many social risks to being public with your arguments and thought process. And if you don't value the "win" in the public sphere, why take on that risk?
Perhaps something that holds people back from publishing more is that many people tie their offline identity to their online identities. Or perhaps it's just a cultural inclination -- maybe most people are like me and don't value the status/social reward of being correct and sharing about it.
It's enough to be privately rigorous and correct.
↑ comment by stavros · 2024-12-18T13:11:22.443Z
Meek people (like me), may not see the worth in undertaking the risk of publicly revealing arguments or preferences. Embarrassment, shame, potentially being shunned for your revealed preferences, and so on -- there are many social risks to being public with your arguments and thought process
2 of the 3 'risks' you highlighted are things you have control over; you are an active participant in your feelings of shame and embarrassment[1], they are strategies 'parts' of you are pursuing to meet your needs, and through inner work[2][3] you can stop relying on these self-limiting strategies.
The 3rd is a feature, not a bug. By and large, anyone who would shun you in this context is someone you want to be shunned by; someone who really isn't worth your time and energy.
The obvious exceptions are for those who find themselves in hostile cultures where revealing certain preferences poses the risk of literal harm.
Epistemic status: assertive/competitive, status blind autist who is having a great time being this way and loves convincing others to dip their toe in the water and give it a try; you might just find yourself enjoying it too :)