How can we increase the frequency of rare insights?

post by MichaelDickens · 2021-04-19T22:54:03.154Z · LW · GW · 10 comments

Contents

  Incremental progress vs. sudden insights
  Some meditations on the nature of insights
    Feynman on finding the right psychological conditions
    P vs. NP
    johnswentworth on problems we don't understand
  Learning how to increase the frequency of insights

In many contexts, progress largely comes not from incremental improvements, but from sudden and unpredictable insights. This is true at many different levels of scope—from one person's current project, to one person's life's work, to the aggregate output of an entire field. But we know almost nothing about what causes these insights or how to increase their frequency.

Incremental progress vs. sudden insights

To simplify, progress can come in one of two ways:

  1. Incremental improvements through spending a long time doing hard work.
  2. Long periods of no progress, interspersed with sudden flashes of insight.

Realistically, the truth falls somewhere between these two extremes. Some activities, like theorem-proving, look more like the second case; other activities, like transcribing paper records onto a computer, look more like the first. When Andrew Wiles proved Fermat's Last Theorem, he had to go through the grind of writing a 200-page proof, but he also had to have sparks of insight to figure out how to bridge the gaps in the proof.

The axis of incremental improvements vs. rare insights is mostly independent of the axis of easy vs. hard. A task can be sudden and easy, or incremental and hard. For example:[1]

       incremental work                  sudden insights
easy   algebra homework                  geometry homework
hard   building machine learning models  proving novel theorems

Insofar as progress comes from "doing the work", we know how to make progress. But insofar as it comes from rare insights, we don't know.

Some meditations on the nature of insights

Why did it take so long to invent X?

Feynman on finding the right psychological conditions

Physicist Richard Feynman talks about this in Take the World from Another Point of View:

I worked out the theory of helium, once, and suddenly saw everything. I'd been struggling, struggling for two years, and suddenly saw everything at one time. [...] And then you wonder, what's the psychological condition? Well I know at that particular time, I simply looked up and I said wait a minute, it can't be quite that difficult. It must be very easy. I'll stand back, I'll treat it very lightly, I'll just tap it, and there it was! So how many times since then, I'm walking on the beach and I say, now look, it can't be that complicated. And I'll tap it, tap it, nothing happens.

Feynman tried to figure out what conditions lead to insights, but he "never found any correlations with anything."

P vs. NP

A pessimistic take would be that there's basically no way to increase the probability of insights. Recognizing insights as obvious in retrospect is easy, but coming up with them is hard, and this is a fundamental mathematical fact about reality because P != NP (probably). As Scott Aaronson writes:

If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps," no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. It's possible to put the point in Darwinian terms: if this is the sort of universe we inhabited, why wouldn't we already have evolved to take advantage of it?

I'm not quite so pessimistic. I agree with Scott Aaronson's basic argument that solving problems is much harder than recognizing good solutions, but there might still be ways we could make it easier to solve problems.
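
To make the gap between finding and recognizing concrete, here is a toy sketch (the tiny 3-SAT instance and its encoding are my own illustration, not anything from Aaronson): checking a proposed assignment takes time linear in the formula, while the naive solver below may try all 2^n assignments.

```python
from itertools import product

# Toy 3-SAT instance: each clause is a tuple of signed variable indices,
# e.g. (1, -2, 3) means (x1 OR NOT x2 OR x3).
clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1), (-2, -3, 1)]

def verify(assignment, clauses):
    """Recognizing a solution: one cheap pass over the formula."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def solve(clauses, n_vars=3):
    """Finding a solution: brute force over 2**n_vars candidates."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(assignment, clauses):
            return assignment
    return None

print(solve(clauses))  # the search is exponential; each verify() is cheap
```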

johnswentworth on problems we don't understand

The concept of sudden-insight problems relates to johnswentworth's concept of problems we don't understand. Problems we don't understand almost always require sudden insights, but problems that require sudden insights might be problems we understand (for example, proving theorems). johnswentworth proposes some types of learning that could help:

  • Learn the gears of a system, so you can later tackle problems involving the system which are unlike any you've seen before. Ex.: physiology classes for doctors.
  • Learn how to think about a system at a high level, e.g. enough to do Fermi estimates or identify key bottlenecks relevant to some design problem. Ex.: intro-level fluid mechanics.
  • Uncover unknown unknowns, like pitfalls which you wouldn't have thought to check for, tools you wouldn't have known existed, or problems you didn't know were tractable/intractable. Ex.: intro-level statistics, or any course covering NP-completeness.

I would expect these types of learning to increase the rate of insights.

Learning how to increase the frequency of insights

Insights happen less frequently under bad conditions: when you're sleep-deprived, or malnourished, or stressed out, or distracted by other problems. Some actions can increase the probability of insights—for example, by studying the field and getting a good understanding of similar problems. But even under ideal conditions, insights are rare.

Interestingly, most of the things that increase the frequency of insights, such as sleep and caffeine, also increase the speed at which you can do incremental work. It's possible that these things speed up thinking, but don't increase the probability that any particular thought is the "right" one.

I can come up with one exception: you can (probably?) increase the frequency of insights on a problem if you understand a wide variety of problems and concepts. I don't believe this does much to speed up incremental work, but it does make sudden insights more likely. Perhaps this happens because sudden insights often come from connecting two seemingly-unrelated ideas. I've heard some people recommend studying two disparate fields because you can use your knowledge of one field to bring a unique perspective to the other one.

Overall, though, it seems to me that we as a society basically have no idea how to increase the frequency of insights beyond a low baseline.

Instead of directly asking how to produce insights, we can ask how to learn how to produce insights. If we wanted to learn more about what conditions produce insights, how might we do that? Could we formally study the conditions under which geniuses come up with genius ideas?

If someone gave me a pile of money and asked me to figure out what conditions best promote insights, what would I do? I might start by recruiting a bunch of mathematicians and scientists to regularly report on their conditions along a bunch of axes: how long they slept, their stress level, etc. (I'd probably want to figure out some axes worth studying that we don't already know much about, since we know that conditions (like sleep quality) do affect cognitive capacity.) Also have them report whenever they make some sort of breakthrough. If we collect enough high-quality data, we should be able to figure out what conditions work best, and disambiguate between factors that help provide insights and factors that "merely" increase cognitive capacity.
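
As a sketch of what that analysis might look like (everything below is hypothetical: the covariates, the synthetic data, and the model choices are mine, not a real study design), one could fit separate models for daily incremental output and for breakthrough days, then compare which conditions matter for each:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000  # researcher-days of self-reports (synthetic stand-in for real data)

# Hypothetical daily conditions: hours slept, stress (0-10), caffeine (mg).
sleep = rng.normal(7.0, 1.2, n)
stress = rng.uniform(0, 10, n)
caffeine = rng.uniform(0, 300, n)
X = sm.add_constant(np.column_stack([sleep, stress, caffeine]))

# Synthetic outcomes, purely to make the sketch runnable: incremental
# output depends on all three factors, insights only on sleep.
output = 2.0 * sleep - 0.5 * stress + 0.005 * caffeine + rng.normal(0, 2, n)
insight = rng.binomial(1, 1 / (1 + np.exp(-(0.6 * sleep - 6))))

ols = sm.OLS(output, X).fit()       # model of incremental work speed
logit = sm.Logit(insight, X).fit()  # model of breakthrough probability
print(ols.params)
print(logit.params)
```

A factor that shows up strongly in the breakthrough model but not in the output model would be evidence that it specifically promotes insights, rather than "merely" increasing cognitive capacity.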

I'm mostly just speculating here—I'm not sure of the best way to study how to have insights. But it does seem like an important thing to know, and right now we understand very little about it.


  1. Some more specific examples from things I've worked on:


10 comments

Comments sorted by top scores.

comment by gwern · 2021-04-20T14:49:21.273Z · LW(p) · GW(p)

Have you looked at the "incubation effect"?

Replies from: eigen
comment by eigen · 2021-04-20T22:23:27.347Z · LW(p) · GW(p)

A talk given by Roger Penrose is apt here: The Problem of Modelling the Mathematical Mind. He tries to characterize how the mind of a sufficiently good mathematician may work, with an emphasis on parallelization of mathematical solutions. An interesting book may also be The Mathematician's Mind: The Psychology of Invention in the Mathematical Field by Jacques Hadamard.

comment by Gordon Seidoh Worley (gworley) · 2021-04-21T23:17:54.559Z · LW(p) · GW(p)

Two ideas to share here.

Meditation is used to gain insight into the nature of reality; this is essentially what meditation is being used for in Buddhist practices. The mechanism by which this works isn't very clear, but it seems to be something like "gather a bunch of evidence about the world and wait for it to mound up so high that it collapses whatever categories you had before so you find new ones".

Related, I think meditation generalizes as a form of neural annealing, which is basically applying the annealing analogy to a process that can happen in the brain: the brain can reconfigure itself into a better state by cycling through a high-energy state before returning to a lower-energy state (or something like that). Lots of details are not worked out here, but the link is the most up-to-date thing I have on this to share.
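
For reference, here is the annealing analogy in its original algorithmic form (a generic simulated-annealing sketch of my own, not anything specific to the neural annealing proposal): a temperature parameter controls how freely the search visits high-energy states before it cools and settles.

```python
import math
import random

def anneal(energy, state, neighbor, t0=5.0, cooling=0.99, steps=2000):
    """Generic simulated annealing: explore while hot, settle while cool."""
    t = t0
    for _ in range(steps):
        candidate = neighbor(state)
        delta = energy(candidate) - energy(state)
        # Accept a worse state with probability exp(-delta / t), so high
        # temperature means more freedom to climb out of local minima.
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = candidate
        t *= cooling  # gradually return to a low-energy regime
    return state

# Toy problem: minimize a bumpy 1-D function with many local minima.
f = lambda x: x * x + 3 * math.sin(5 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
print(anneal(f, 2.0, step))
```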

comment by Pattern · 2021-04-21T17:22:08.045Z · LW(p) · GW(p)

If someone gave me a pile of money and asked me to figure out what conditions best promote insights, what would I do? I might start by recruiting a bunch of mathematicians and scientists to regularly report on their conditions along a bunch of axes: how long they slept, their stress level, etc. (I'd probably want to figure out some axes worth studying that we don't already know much about, since we know that conditions (like sleep quality) do affect cognitive capacity.) Also have them report whenever they make some sort of breakthrough. If we collect enough high-quality data, we should be able to figure out what conditions work best, and disambiguate between factors that help provide insights and factors that "merely" increase cognitive capacity.

If you want high quality data, maybe start with video?

comment by Slider · 2021-04-19T23:24:17.977Z · LW(p) · GW(p)

In The Meaning Crisis (which a book club is currently going through), John Vervaeke claims that shamanic traditions use sleep deprivation to generate insights, and that machine learning generates adaptive models by randomly chucking nodes away from models.

I think you are interested in "great results" and he is talking about "huge reinterpretations".

One of the examples used was a puzzle given to a mathematician: take a chess board and remove opposite corner squares. Can the rest be covered by dominoes with no overlap or gaps? He argues that because the mathematician was confident in their math skills, they worked through many different attempts to cover it. However, there is a way of posing the problem that makes it easy to solve: color the squares like a checkerboard; the two removed corners share a color, and every domino covers one square of each color, so no tiling exists. A "good mathematician" would probably spend a lot of time within a single frame and would actually be a bad source of insights.
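
A quick machine check of that coloring argument (the 8x8 board encoding with corners (0,0) and (7,7) removed is my own illustration):

```python
# Remove two opposite corners from an 8x8 board.
removed = {(0, 0), (7, 7)}
squares = [(r, c) for r in range(8) for c in range(8) if (r, c) not in removed]

# Checkerboard coloring: opposite corners have the same color.
white = sum((r + c) % 2 == 0 for r, c in squares)
black = len(squares) - white

# Every domino covers one white and one black square, so a perfect
# tiling requires white == black; here they differ by two.
print(white, black)  # 30 32 -> no tiling exists
```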

Replies from: eigen, romeostevensit
comment by eigen · 2021-04-20T22:18:34.868Z · LW(p) · GW(p)

Mathematician Richard Borcherds said in an interview that he does not have a great memory, and that this allows him to come back to a mathematical problem and try solving it in a different way than he did before (because he does not remember how he solved it).

comment by romeostevensit · 2021-04-20T00:24:38.324Z · LW(p) · GW(p)

There's also the Japanese inventor who would borderline drown himself for ideas.

comment by weft · 2021-04-20T09:40:09.391Z · LW(p) · GW(p)

You can simplify the problem into straight behaviorism. I'd have to look up which book I read this in (Don't Shoot the Dog, maybe?), but there is a game you can teach dogs, dolphins, etc., where you give them a box or something and only reward them for novel behavior. So you reward them the first time they push it with their nose, but not any subsequent times. This seems to "teach" creativity, in that animals that play this game regularly get good at quickly coming up with unusual actions; a toy version of the reward rule is sketched below.

Note: I'm not saying the CORRECT thing to do is ignore all the substeps, conditions, prerequisites, etc. and go straight to "just reward the thing you want". It was just a cute anecdote that seemed relevant.
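
Here is a minimal sketch of that reward rule (the behavior labels and payout values are hypothetical placeholders):

```python
# Reward only behaviors that have not been seen before.
seen = set()

def reward(behavior):
    """Pay out the first time a behavior appears, never again."""
    if behavior in seen:
        return 0.0
    seen.add(behavior)
    return 1.0

for b in ["nose push", "nose push", "paw tap", "sit on box"]:
    print(b, reward(b))  # "nose push" pays once; the repeat earns nothing
```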

Replies from: ChristianKl
comment by ChristianKl · 2021-04-20T09:47:39.257Z · LW(p) · GW(p)

While this might produce novel behavior, I doubt that it trains insightful behavior. The heuristics for doing something novel are likely not the same as the heuristics for actually integrating different ideas into insights.

comment by Olomana · 2021-04-20T07:45:57.535Z · LW(p) · GW(p)

When I solve a sudoku, I typically make quick, incremental progress, then I get "stuck" for a while, then there is an insight, then I make quick, incremental progress until I finish.  Not that there is anything profound about sudokus, but something like this might provide a controlled environment for studying insights.  http://websudoku.com/ provides an endless supply of classic sudokus in 4 levels of difficulty.  My experience is that the "Evil" level is consistently difficult.  I have noticed that my being tired or distracted is enough to make one of these unsolvable.

You also discussed cross-discipline insights.  There are sudoku variants, such as sudokus with knight's-move constraints.  Here my experience is that having recently worked on a sudoku variant tends to interfere with solving a classic sudoku.  I also solve the occasional chess problem, but have not noticed any interaction with sudokus.