Why are some problems Super Hard?

post by Gabriel Alfour (gabriel-alfour-1) · 2022-08-24T17:58:41.031Z · LW · GW · 1 comment

This is a question post.

Contents

  The problem
  Preemption #1
    "Many smart researchers have tried solving them, and failed, and made no progress. This is why it is obvious that those problems are very hard."
  Preemption #2
    "There are typical barriers to P vs NP described here (and many approaches in Scott Aaronson's survey) or to the Collatz's Conjecture"
  Preemption #3
    "For any given difficult problem, you can walk up to an expert and ask them why it's considered hard, but the answers they give you won't have any unifying theme aside from that. It's all ad hoc."
  Edits
  Answers
    9 Thane Ruthenis
    7 shminux
    6 niknoble
    6 johnswentworth
    5 Yair Halberstadt
    3 Dagon
    2 JBlack
    1 Dennis Towne
    0 tailcalled
    -1 Ilio
1 comment

The problem

I have been wondering a lot about a problem lately, that I believe has far reaching consequences.

Some context: There are many theoretical problems that are considered to be obviously far, far outside our problem-solving ability. Two examples are the Collatz Conjecture and P vs NP.
My reasoning: If that's the case, then for each of those problems there should be a strong knock-down argument for *why* it is intrinsically hard to solve.
My problem: I don't know of any such argument.

If anyone knows any such knock-down argument for why those problems are so hard, I'd be very interested.

Preemption #1

"Many smart researchers have tried solving them, and failed, and made no progress. This is why it is obvious that those problems are very hard."

I understand this, but it does not help. It does not give a causal model of why those problems are hard. It is not even a straightforward predictive model, let alone a causal one: to what extent does the fact that people have failed so far tell us that they will keep failing? You need some kind of model or informed prior to say anything about that.

Even worse, it might entirely be tautological, where the definition of "hard" is something like "the amount to which we have tried to solve the problem and failed".

 

It also fails to address the source of my confusion. When I try to describe it more precisely, it looks like "Given that so many geniuses have tried and failed, why don't we have a solid explanation for why it doesn't work?"
The shape of what I am looking for is something like a deep principle, a fundamental reason, or just a simple razor that explains how those problems fall outside the scope of what we know how to solve.

Phrased differently: it should be possible to define an over-approximation of what we can prove right now, and then show that those problems are not within it.
The more "far outside our problem-solving ability" they are, the easier it should be to define this over-approximation and a razor to separate the problem from it.

Preemption #2

"There are typical barriers to P vs NP described here (and many approaches in Scott Aaronson's survey) or to the Collatz's Conjecture"

I understand this, but after reading them, I am still confused. They are more along the lines of "we have tried specific proof techniques and failed" than "they are super intractable".
If you don't make progress on a problem, a natural next-best thing is usually to make progress on the meta-problem of "Why haven't I made progress?", in order to reflect on what went wrong.

As such, I have some meta-confusion that goes like: "Is that question actually so uninteresting that people do not study it? Or is it that this question is also too hard to make any legible progress on?"

Preemption #3

"For any given difficult problem, you can walk up to an expert and ask them why it's considered hard, but the answers they give you won't have any unifying theme aside from that. It's all ad hoc."

Indeed, I expect that for each of those problems, there is some unifying reason that makes them hard. Or at most, a couple of general ones.

Experts from many different fields of Maths and CS have tried to tackle the Collatz Conjecture and the P vs NP problem. Most of them agree that those problems are way beyond what they set out to prove. And I mostly agree that each expert's intuition vaguely tracks one specific dimension of the problem.

But any simplicity prior tells you that it is more likely for there to be a general reason for why those problems are hard along all those dimensions, rather than a whole bunch of ad-hoc reasons.

Restated:

  1. Many different approaches have been tried. In the case where only a couple of specific approaches have been tried, I expect the reason for why it hasn't been solved to be ad-hoc and related to the specific approaches that have been tried. The more approaches are tried, the more I expect a general reason that applies to all those approaches.
  2. Those problems are simple. In the case of a complicated problem, I would expect the reason for why it hasn't been solved to be ad-hoc. I expect this much less in the case of simple problems.

Edits

Answers

answer by Thane Ruthenis · 2022-08-24T19:18:52.712Z · LW(p) · GW(p)

Why are some problems Super Hard?

Because there's a long chain of undiscovered abstractions between the abstractions we have now and the abstractions needed to solve such problems. Or, equivalently, writing down/conceptualizing the solution in terms of extant abstractions would result in a solution too long to fit in a human's working memory.

To elaborate: At any given time, we have a number of extant conceptual tools/building blocks. Proven math theorems, established mathematical frameworks, discovered laws of physics, other ideas. We can put these conceptual blocks together to create new tools/blocks, like defining a new mathematical function in terms of some other functions. We then abstract over the result: learn to think of the new function as its own thing that has its own rules/properties, without explicitly thinking about its definition. We chunk our understanding.

Human working memory is limited. We can effectively consider situations only of limited mental complexity, can fit only so much under our mind's eye. Past a certain point, we have to factorize, break the problem down into sub-problems.

Chunking is part of that. If solving some problem requires putting together 20 conceptual building blocks, we can divide them into five groups, abstract over every four-piece group, and then reason about the five higher-level building blocks. Each of these blocks would be "equivalent" to the four blocks they're made of, in the context of this specific problem, but their mental complexity will have been reduced to the higher-level summary of the four-block system.

As such, for any given pair of (research problem, conceptual toolbox), there's some length L that denotes the shortest solution to the research problem expressed in terms of the extant conceptual tools. If L is larger than a human's working-memory capacity W, then the problem would need to be broken down into sub-problems before it can be solved. Specifically, it'd need to be broken down roughly L/W times.

And if that value is very large, the problem is Super Hard: we'd need to do a lot of conceptual engineering before we can reason out the solution. Build a "conceptual bridge" to it.
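
As a toy illustration of this chunking model (my own sketch, not from the original answer; L and W are the solution length and working-memory capacity from above), here is how deep the tower of abstractions has to be as L grows:

```python
import math

def chunking_levels(L: int, W: int) -> int:
    """How many rounds of grouping-and-abstracting are needed before the
    top-level argument fits into W working-memory slots."""
    levels = 0
    blocks = L
    while blocks > W:
        blocks = math.ceil(blocks / W)  # abstract over groups of up to W blocks
        levels += 1
    return levels

print(chunking_levels(20, 5))      # 1: the 20-blocks-into-5-groups example above
print(chunking_levels(10_000, 7))  # 4: a much deeper tower of intermediate concepts
```

The depth grows only logarithmically in L, but on this model each level is a piece of conceptual engineering that has to be invented, which is where the "Super Hard" part comes from.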

Analogies:

  • Manufacturing. To build the most advanced of the modern technologies, there's a long line of tools-building-tools-building-....-tools-building-tools, if we're starting from the Stone Age tech.
  • Programming. It's often advised to break down long functions into sub-functions of less than five or less than ten lines, otherwise they're pretty hard to think about and bugfix.
comment by johnswentworth · 2022-08-24T21:21:08.914Z · LW(p) · GW(p)

Any reason to expect that P vs NP or the Collatz Conjecture in particular have unusually long undiscovered abstraction-chains? (Other than nobody having solved them already.) What's the feature of these particular problems which makes them particularly difficult?

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2022-08-24T21:44:58.980Z · LW(p) · GW(p)

My instinctive answer is that no, there's no special property, no neat compact reason. It just so happens that these specific pairings of (problem, conceptual toolbox) have a very large L. If we were a different or alien culture, one that had historically pursued different branches of math and started from different native concepts, these problems might've been trivial, and some of our trivial problems might've been tremendously difficult.

I guess "what native concepts we start with" and therefore "what math branches we historically pursue" might not be entirely arbitrary, conditioning on the fact that we're evolved life. There might even be some generalities across most probable instances of naturally-evolved minds. But that's the only source of non-arbitrariness I see here.

A starker example is Fermat's Last Theorem. There's really no reason I can see why it "had" to have a proof over 100 pages long. (Also, this is why I think Fermat was full of it when he claimed to have solved it. I'm sure mathematicians following him have cycled through all possible math-tools of his time and their neighborhoods, so unless he built a really long chain of math results in his private time and then published none of it, he couldn't have done it.)

comment by TAG · 2022-08-24T19:50:12.716Z · LW(p) · GW(p)

Because there’s a long chain of undiscovered abstractions between the abstractions we have now and the abstractions needed to solve such problems

If there's a guaranteed sequential, acyclic stack of abstractions, with just a few missing, you're playing on easy mode.

Philosophical problems are hard because of circularities... we don't know which of logic, epistemology, ontology, etc., is most fundamental... and we can't figure it out without establishing a starting point.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2022-08-24T20:11:28.468Z · LW(p) · GW(p)

Well, yes, if you can't conceptualize the solution because it's too large, you don't know what building blocks go into it, and you don't even necessarily know in which direction to build the bridge/what frameworks to develop/what sub-problems to solve. Indeed, that's probably where most of the difficulty lies, especially in pre-paradigmatic fields.

It's even more stark if we view it in terms of individual researchers, and not the entire civilization. Any given human has access to only a subset of the conceptual toolbox of the civilization, and if some problem in Domain A can be solved by a tool from a distinct Domain B, it might be a long time before a specialist with knowledge of both comes around; or alternatively, before Domain A people re-invent the relevant tool already present in Domain B.

Still, this sort of cross-domain bleed-through is more of a "low-hanging fruit", in the sense that if a problem is very famous, most of our civilization's tools have already been tried on it, in every combination that makes sense. The bottleneck then is genuine conceptual engineering.

Replies from: TAG
comment by TAG · 2022-08-24T20:23:56.687Z · LW(p) · GW(p)

Well, yes, if you can’t conceptualize the solution because it’s too large,

It might be too big for a brain, but it might also be inherently circular. (A closed loop can be small.) How would you know which? What guarantee do you have that it's a stack of abstractions?

Remember, all forms of engineering are easy mode compared to science, which is easy compared to philosophy.

The bottleneck then is genuine conceptual engineering.

Unless it's circularity.

comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T07:11:20.560Z · LW(p) · GW(p)

I agree this establishes that the Collatz Conjecture and P vs NP have longer chain lengths than everything that has been tried so far. But this looks to me like a restatement of "they are not solved yet".

Namely, this does not establish any cause for them being much harder than other merely not-yet-solved problems. I do not see how your model predicts that the Collatz Conjecture and P vs NP are much harder than other theorems in their fields that were proved in the last 15 years, which is what I believe experts do expect.

Put differently, the way I understand it, your model explains things post hoc (i.e., "if some problems are not solved, then they must have had long chains"), but does not make predictions (how long do you expect their chains to be? how does this translate in terms of difficulty?).

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2022-08-25T09:33:11.642Z · LW(p) · GW(p)

I don't think this can be predicted [LW(p) · GW(p)].

Replies from: gabriel-alfour-1
comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T09:45:55.311Z · LW(p) · GW(p)

The post is about predictions made by experts in number theory and complexity theory.

If you think that this can not be predicted, and that they are thus wrong about their predictions, I would be interested in knowing why.

Namely:

  • Do you have mechanistic / gear-level / inside view reasons for why the difficulty of problems can not be predicted ahead of time, where you disagree with those experts?
  • Do you have empirical / outside view reasons for why those experts are badly calibrated?
Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2022-08-25T11:26:57.558Z · LW(p) · GW(p)

The post is about predictions made by experts in number theory and complexity theory.

Isn't it about empirical evidence that these problems are hard, not "predictions"? They're considered hard because many people have tried to solve them for a long time and failed, not because experts glanced at them once and knew on priors they'd be legendarily difficult.

Aside from that, an expert can estimate how hard a problem is by eyeballing how distant the abstractions needed to solve it feel from the known ones — whether we have almost the right tools for solving it, or have no idea how the right tools would look at all. They're able to do this because they've developed strong intuitions for their professional domain: they roughly know what's possible, what's on the edge of the possible, and what's very much not. And even then, such intuitions are often very wrong, see Fermat's Last Theorem.

But there's no objective property that makes these problems intrinsically hard, only subjectively hard from the point of view of our conceptual toolbox.

Replies from: gabriel-alfour-1
comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T13:54:03.383Z · LW(p) · GW(p)

Isn't it about empirical evidence that these problems are hard, not "predictions"? They're considered hard because many people have tried to solve them for a long time and failed.

No, this is Preemption 1 in the Original Post.

"hard" doesn't mean "people have tried and failed", and you can only witness the latter after the fact. If you prefer, even if have empirical evidence for the problem being "level n hard" (people have tried up to level n), you;d still do not have empirical evidence for the problem being "level n+1 hard" (you'd need people to try more to state that if there's nothing you can say about it ahead of time). Ie, no predictive power.

An expert can estimate how hard a problem is by eyeballing how distant the abstractions needed to solve it feel from the known ones

Great! We're getting closer to what I care about.

Then what I am saying is that there is a heuristic that the experts are using to eyeball this, and I want to know what that is, starting with those two conjectures!

I am also saying that the more distant "the abstractions needed to solve it feel from the known ones", the easier it should be to make that eyeballing explicit.

They're able to do this because they've developed strong intuitions for their professional domain: they roughly know what's possible, what's on the edge of the possible, and what's very much not.

Exactly, but those intuitions are implemented somehow. How?

Also, the more experts agree on a judgment, and the stronger their judgment, the easier you would expect it to be to explain that intuition.


But there's no objective property that makes these problems intrinsically hard, only subjectively hard from the point of view of our conceptual toolbox.

I was very confused when I read this. For instance, the part in bold is already reflected in the Original Post.

There are many theoretical problems that are considered to be obviously far far outside our problem solving ability.

If you prefer, interpret "intrinsically hard" as "having an intrinsic property that makes it subjectively hard for us". To model how that would look, consider the following setup:

The space of problems is just the real plane, and our ability to solve problems is modeled by a unit disk in the plane (if a problem is in the disk, it is solved, and the closer it is to the disk, the easier it is to solve). Then the difficulty of a problem is subjective: it depends on where the disk is.

But let's say the disk is somewhere on the x-axis; then an intrinsic property of a problem, namely having a high y-coordinate, is what makes it subjectively hard.
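
A minimal numerical sketch of that setup (my own illustration; the disk position and radius are arbitrary):

```python
import math

def subjective_difficulty(problem, center=(0.0, 0.0), radius=1.0):
    """Distance from a problem (a point in the plane) to the ability disk;
    0 means the problem is within our current problem-solving ability."""
    dx, dy = problem[0] - center[0], problem[1] - center[1]
    return max(0.0, math.hypot(dx, dy) - radius)

print(subjective_difficulty((0.5, 0.2)))   # 0.0  (inside the disk: solved)
print(subjective_difficulty((3.0, 0.0)))   # 2.0
print(subjective_difficulty((0.0, 10.0)))  # 9.0  (high y: hard wherever the disk sits on the x-axis)
```

Difficulty is relative to where the disk is, but a large y-coordinate is an intrinsic property of the problem that makes it subjectively hard for any disk placed on the x-axis.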


I'll make some edits to the post. I thought most of this was clear because of Preemption #1, but it was not.

Replies from: ricraz, Thane Ruthenis
comment by Richard_Ngo (ricraz) · 2022-09-14T20:16:34.862Z · LW(p) · GW(p)

even if you have empirical evidence for the problem being "level n hard" (people have tried up to level n), you'd still not have empirical evidence for the problem being "level n+1 hard"

This is implicitly assuming that our expectation of how long a problem should take to solve is memoryless. But a breakthrough is much more likely on the 1st day of working on a problem than on the 1000th day. More generally, if problems vary greatly in difficulty, then our failure to solve a given problem provides evidence that it's one of the harder problems. So a more reasonable prior in this case is something like logarithmic - e.g. it's equally likely that a problem takes 1-10 days, or 10-100 days, or 100-1000 days, etc, to solve.

A similar model can give rise to the Lindy effect, where the expected lifetime is proportional to the lifetime so far. (In this case it'd be the expected time to solve the problem, which would be proportional to the time for which the problem has been open.)
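
A quick numerical sketch of this point (my own illustration; the log-uniform prior over 1 to 10^6 days is an arbitrary choice):

```python
import random

def sample_solve_time() -> float:
    # Log-uniform over 1 to 10^6 days: equally likely to land in
    # 1-10, 10-100, ..., 10^5-10^6 days.
    return 10 ** random.uniform(0, 6)

def median_remaining(t: float, n: int = 200_000) -> float:
    # Condition on the problem still being open after t days.
    remaining = sorted(x - t for x in (sample_solve_time() for _ in range(n)) if x > t)
    return remaining[len(remaining) // 2]

for t in [10, 100, 1000]:
    print(t, round(median_remaining(t)))
# The median remaining time grows with how long the problem has already been
# open (Lindy-like), unlike under a memoryless prior.
```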

comment by Thane Ruthenis · 2022-08-25T15:07:36.022Z · LW(p) · GW(p)

even if you have empirical evidence for the problem being "level n hard" (people have tried up to level n), you'd still not have empirical evidence for the problem being "level n+1 hard" (you'd need people to try more before you could state that, if there's nothing you can say about it ahead of time). I.e., no predictive power.

Suppose I take out a coin and flip it 100 times in front of your eyes, and it lands heads every time. Will you have no ability to predict how it lands the next 30 times? Will you need some special domain knowledge of coin aerodynamics to predict this?

Then what I am saying is that there is a heuristic that the experts are using to eyeball this, and I want to know what that is, starting with those two conjectures!

I mean... That heuristic is that heuristic? "Experts have a precise model of the known subset of the concept-space of their domain,  and they can make vague high-level extrapolations on how that domain looks outside the known subset, and where in the wider domain various unsolved problems are located, and how distant they are from the known domain". The way I see it, that's it. This statement isn't reducible to something more neat and simple. For any given difficult problem, you can walk up to an expert and ask them why it's considered hard, but the answers they give you won't have any unifying theme aside from that. It's all ad hoc.

Why would you think there's something else? What shape do you want the answer to have?

Replies from: gabriel-alfour-1
comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T17:26:41.920Z · LW(p) · GW(p)

Suppose I take out a coin and flip it 100 times in front of your eyes, and it lands heads every time. Will you have no ability to predict how it lands the next 30 times? Will you need some special domain knowledge of coin aerodynamics to predict this?

  • Coin = problem
  • Flipping head = not being solved
  • Flipping tail = being solved
  • More flips = more time passing

Then, yes. Because you have seen many other coins that started flipping tails at some point, and there is no easily discernible pattern.

By your interpretation, the Solomonoff-induced prior for that coin is basically "it will never flip tails". Whereas you do expect that most problems that have not been solved yet will be solved at some point, which means that you are incorporating more knowledge.

"Experts have a precise model of the known subset of the concept-space of their domain,  and they can make vague high-level extrapolations on how that domain looks outside the known subset, and where in the wider domain various unsolved problems are located, and how distant they are from the known domain"

Experts from many different fields of Maths and CS have tried to tackle the Collatz Conjecture and the P vs NP problem. Most of them agree that those problems are way beyond what they set out to prove. I mostly agree with you on the fact that each expert's intuition vaguely tracks one specific dimension of the problem.
But any simplicity prior tells you that it is more likely for there to be a general reason for why those problems are hard along all those dimensions, rather than a whole bunch of ad-hoc reasons.

The way I see it, that's it. This statement isn't reducible to something more neat and simple. 

For any given difficult problem, you can walk up to an expert and ask them why it's considered hard, but the answers they give you won't have any unifying theme aside from that. It's all ad hoc.

What makes you think that? I see you repeating this, but I don't see why that would be the case.

Why would you think there's something else?

Good question, thanks! I tried to hint at this in the Original Post, but I think I should have been more explicit. I will make a second edit that incorporates the following.

The first reason is that many different approaches have been tried. In the case where only a couple of specific approaches have been tried, I expect the reason for why it hasn't been solved to be ad-hoc and related to the specific approaches that have been tried. The more approaches are tried, the more I expect a general reason that applies to all those approaches.

The second reason is that the problems are simple. In the case of a complicated problem, I would expect the reason for why it hasn't been solved to be ad-hoc. I have much less of this expectation for simple problems.

answer by Shmi (shminux) · 2022-08-25T02:34:12.691Z · LW(p) · GW(p)

A better approach to answering why a specific problem is super-hard given a certain level of knowledge is doing a post-mortem on hard-but-solved problems. A few random examples:

  • Fermat's Last Theorem
  • Poincare Conjecture
  • Trisecting an angle
  • Non-Euclidean geometry (search for a proof of Euclid's 5th postulate)
  • Calculation of planetary motion (from epicycles to Newton's laws)

One could ask the following questions, one easy and one hard:

  • What made these problems easy to state but hard to solve?
  • Given the level of knowledge at the time the problem was stated, what contemporary signs of "hardness" could one have noticed?

Once/if you answer these, and figure out some patterns of hardness, you can then think about yet-unsolved problems and see if you can spot the same signs. If you are lucky, it might also give you promising directions for fruitful research.

Edit: it may well be that there is no recognizable pattern, some peaks are taller than others but all are hidden in the clouds and the only way to find out is through the hard work of climbing them.

comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T07:44:21.055Z · LW(p) · GW(p)

Interesting.

A nice way to do such a post-mortem would be to actually ask the people who were there if they thought the problem was Super Hard, why so, and how they did update after the solution was found.

Thanks!

answer by niknoble · 2022-08-25T18:54:52.873Z · LW(p) · GW(p)

Consider this problem: Are there an infinite number of 9s in the digits of pi? The answer is obviously yes. The only way it could be no is if pi interacts in some strange way with base-10 representations, and pi has nothing to do with base-10.

But how do you prove that no such interaction exists? You have to rule out an endless number of possible interactions, and even then there could be some grand conspiracy between pi and base-10, hiding in the places you haven't yet looked.

Proving the absence of an interaction between two areas of math is much harder than proving its presence. If you want to prove presence, you can just find the interaction and explain it. But you can't "find an absence."

Most of the hard math problems turn out to have this issue at their core. If you dig into Collatz, you find that it's very likely to be true. The only way it could be false is if there's an undiscovered conspiracy between parities of integers and the Collatz map. How do you prove there is no conspiracy?

comment by DragonGod · 2022-09-14T19:58:42.932Z · LW(p) · GW(p)

Aren't proofs by contradiction the standard technique for proving the absence of a property?

You can prove ¬P by proving that P => FALSE?
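
For what it's worth, in a proof assistant this is literally how negation is defined; a minimal Lean 4 sketch (my own illustration):

```lean
-- ¬P is definitionally P → False, so deriving False from P *is* a proof of ¬P:
example (P : Prop) (h : P → False) : ¬P := h

-- The classical direction, concluding P from "¬P leads to a contradiction",
-- needs excluded middle:
example (P : Prop) (h : ¬P → False) : P := Classical.byContradiction h
```

The catch, per the parent answer, is not the proof-by-contradiction step itself but producing the contradiction: you still have to rule out every way the "conspiracy" could hide.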

answer by johnswentworth · 2022-08-24T22:31:05.898Z · LW(p) · GW(p)

My intuition is that both P vs NP and Collatz are hard for a similar reason: they're asking about arbitrary problems, without any interesting special structure.

The vast majority of our math is developed to handle problems with some special structure - symmetry, linearity, locality, etc. P vs NP is asking "ok, but how hard is it to solve arbitrary problems without any special structure, other than the ability to check a solution efficiently?". And Collatz is just a random-ass problem which doesn't seem to have any special structure to it.

If we want to make progress on arbitrary problems without any special structure, then we need insights into problem-solving in general, not just the myriad of special cases which we usually think about. Our insights need to be maximally generalizable, in some sense. In the case of Collatz, there might exist some special trick for it, but it's not any of the tricks we know. In the case of P vs NP, the problem itself is directly about very general problem-solving.

comment by Thane Ruthenis · 2022-08-25T00:32:40.852Z · LW(p) · GW(p)

I disagree. Any given arbitrary problem is an instance of some broader set of problems, such that knowing how to solve that arbitrary problem allows you to trivially solve any other problem in that broader set. Conversely, if you can't trivially solve it, that means you don't understand the entirety of some fairly broad segment of concept-space.

In that sense, there's no problems "without any interesting special structure" that are hard to solve relative to some conceptual toolbox. If you can't trivially solve it, there are interesting structures to be uncovered on the way to solving it. ("Triviality" is a variable here, too. The more non-trivial solving a problem is, the higher the distance between your tools and the tools needed to solve it; the more ignorant you are, and so the more interesting structures you can uncover on the way to solving it.)

NP-complete problems are... I wanted to say "the exception", but they're not, really. They're not conceptually hard to solve (unless it turns out P=NP after all, but surely not), just computationally so.

Edit: Again, Fermat's Last Theorem is a good example. It's the most random-ass problem among random-ass problems, but the solution to it required developing some broadly applicable tools.

comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T07:32:32.767Z · LW(p) · GW(p)

And Collatz is just a random-ass problem which doesn't seem to have any special structure to it.

The Collatz Conjecture has a lot of structure to it:

  • It is false for 3n-1
  • It is undecidable for generalizations of it
  • It is true empirically, up to 10^20 (a minimal checker for small n is sketched just after this list)
  • It is true statistically: a result of Terence Tao establishes that "almost all initial values of n eventually iterate to less than log(log(log(log(n))))" (or inverse Ackermann)
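
A minimal sketch of the map and the small-n check referenced above (my own illustration; the actual verification up to 10^20 of course used far more sophisticated machinery):

```python
def collatz_steps(n: int, limit: int = 10_000) -> int:
    """Number of 3n+1 / n/2 steps until n reaches 1, or -1 if the limit is hit."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if steps >= limit:
            return -1
    return steps

# Every starting value in a small range reaches 1, consistent with the conjecture.
assert all(collatz_steps(n) >= 0 for n in range(1, 100_000))
```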

In the case of Collatz, there might exist some special trick for it, but it's not any of the tricks we know.

I am not sure what you count or don't count as a trick. Is it only specific proof techniques, or entire fields (e.g., number theory) too?

P vs NP is asking "ok, but how hard is it to solve arbitrary problems without any special structure, other than the ability to check a solution efficiently?"

Isn't this true of most already proven theorems in complexity theory too? Among other things:

  • We can prove the separation of P vs EXPTIME
  • We can not prove the separation of P vs PSPACE (PSPACE! Never mind NP)
  • We can prove that PSPACE = APTIME = IP = QIP

If we want to make progress on arbitrary problems without any special structure, then we need insights into problem-solving in general, not just the myriad of special cases which we usually think about.

While I do not think this actually applies to P vs NP or the Collatz Conjecture, I think it actually applies to my meta-question, which is something like "Why has no one actually proven that those 2 problems are Super Hard? Or not even proven it, just given strong heuristic arguments for why they should be Super Hard".

Sounds reasonable that to answer such a question, even if those problems do have some special structure, you need insights into problem-solving in general.

answer by Yair Halberstadt · 2022-08-26T14:41:58.653Z · LW(p) · GW(p)

With regard to P vs NP, I think it's relevant that the vast majority of interesting questions in complexity theory are open, despite this being an area which has received a lot of attention in the last 50 years. This suggests that the problem is hard because we haven't come up with good techniques in this field - it's like trying to climb Everest using 15th-century technology.

answer by Dagon · 2022-08-25T00:08:08.603Z · LW(p) · GW(p)

There is a class of super-hard problems (AI alignment, a lot of social change) which are hard because they're adversarial (at least partly).  If different agents value different results, there can be no single preferred outcome (there may be a negotiated agreeable outcome, or there may not, but it won't be "best" for everyone).

I don't think that's what you're talking about though.  I think part of the explanation is that we don't have a model for distance from success.  We have no clue if the researchers who've made serious attempts on these problems got us closer to an answer/proof, or if they just spun their wheels.  In other words, these problems are hard because we haven't found an incremental way to solve them.

Note that this is related to the reasons that colonizing Mars or mining asteroids is hard - there's a lot of problems that need to be solved, many of which we don't know a feasible/economical approach to, and as long as any of them is unsolved, the end-result remains impossible.  Also similarly, there are discoveries about the sub-problems that are valuable in themselves, even if they don't get us to the stated goal.

comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T09:49:21.847Z · LW(p) · GW(p)

I think part of the explanation is that we don't have a model for distance from success.  We have no clue if the researchers who've made serious attempts on these problems got us closer to an answer/proof, or if they just spun their wheels. 

This post is about experts in the fields of number theory and complexity theory claiming to have a clue about this. 
If you think "We have no clue", you likely think they are wrong, and I would be interested in knowing why.

I added more details in this comment [LW(p) · GW(p)], given that someone else already shared a similar thought.

answer by JBlack · 2022-08-25T07:36:33.711Z · LW(p) · GW(p)

For those two problems in particular, there are good reasons to expect them to be difficult to solve.

There are lots of Collatz-like problems that are formally undecidable, in the sense that for any formal system there exists a similar iteration problem where the formal system cannot prove the behaviour one way or the other. It is plausible that the actual Collatz system is one of these for our standard proof systems, and that the answer depends upon what we actually mean by natural numbers in some stronger sense.

P vs NP is another candidate for an undecidable problem, dealing as it does with general behaviour of Turing machines that can run programs with rather weak bounds. There's a lot that we can't yet prove about general computation systems, and we have theorems that say we should expect there to be a lot that we can't ever prove. It would be unsurprising if quite a lot of the problems we can't solve after trying very hard are actually not solvable.

comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T09:41:28.287Z · LW(p) · GW(p)

It is plausible that the actual Collatz system is one of these for our standard proof systems.

Why? Consider the following:

The Collatz Conjecture has a lot of structure to it:

  • It is false for 3n-1
  • It is undecidable for generalizations of it
  • It is true empirically, up to 10^20
  • It is true statistically, a result of Terence Tao establishes that "almost all initial values of n eventually iterate to less than log(log(log(log(n))))" (or inverse Ackermann)

Additionally, if you look at the undecidable generalizations of the Collatz Conjecture, I expect that you will find them much stronger than the base system. And when people prove such results, they look for the weakest system such that you still have undecidability.

As a result of those considerations, I find it quite implausible that the Collatz Conjecture is undecidable, and I would be interested in what makes you think otherwise.


P vs NP is another candidate for an undecidable problem, dealing as it does with general behaviour of Turing machines that can run programs with rather weak bounds. There's a lot that we can't yet prove about general computation systems, and we have theorems that say we should expect there to be a lot that we can't ever prove.

My answer is similar here to Scott Aaronson's:

More to the point, if you believe P vs. NP is undecidable, then you need to answer the question: why does whatever intuition tells you that, not also tell you that the P versus EXP, NL versus PSPACE, MAEXP versus P/poly, TC0 versus AC0, and NEXP versus ACC questions are similarly undecidable? (In case you don’t know, those are five pairs of complexity classes that have been proved different from each other, sometimes using very sophisticated ideas.)

 

It would be unsurprising if quite a lot of the problems we can't solve after trying very hard are actually not solvable.

I am not sure if you mean "unsurprising" in the literal or metaphorical way.

If you mean that it would literally be merely unsurprising, I agree, this is in the realm of possibilities. But I would not call this a "good reason to expect it to be difficult to solve".

If you mean it as very likely, as to make it a good reason, I disagree. I expect that if we went through Wikipedia's list of mathematical Conjectures solved since 1995 (which I believe is a good approximation of "Problems that were really hard, and we get to find out how they turned out"), we'd find that most of them were actually resolved, rather than found as undecidable.

answer by Dennis Towne · 2022-08-24T21:28:01.364Z · LW(p) · GW(p)

There are a lot of Super Hard problems where we do know why they are hard to solve.  Quite a few of them in fact:

  • How can we cure cancer?
  • How can we maintain human biological hardware indefinitely?
  • How can we build a human-traversable wormhole?
  • How can we build a Dyson sphere?
  • How can societies escape inadequate equilibria?

Are these perhaps boring, because the difficulty is well understood?

Would it be worthwhile to enumerate the various classes of Super Hard problems, to see if there are commonalities between them?

comment by Gabriel Alfour (gabriel-alfour-1) · 2022-08-25T09:52:42.406Z · LW(p) · GW(p)

Are these perhaps boring, because the difficulty is well understood?

They are not boring, I am simply asking about some specific cluster of problems, and none of them belong to that cluster.

answer by tailcalled · 2022-08-24T19:34:12.133Z · LW(p) · GW(p)

One alternative to problems being super hard would be if we had some algorithm running in some reasonable amount of time, say, polynomial time, which could solve them. But proving mathematical statements is NP-complete, so such an algorithm would show that P=NP, while a proof that such an algorithm cannot exist would show that P != NP.

comment by Yair Halberstadt (yair-halberstadt) · 2022-08-25T04:13:33.253Z · LW(p) · GW(p)

Why is proving mathematical statements NP-complete? There is no guarantee that a polynomial length proof of a true mathematical statement of length N exists.

Replies from: tailcalled
comment by tailcalled · 2022-08-25T07:13:00.751Z · LW(p) · GW(p)

Right, so there are technical conditions such as this that apply; finding proofs of bounded length, where the bound is given in unary, is NP-complete. Otherwise, if arbitrary-length proofs count, it's halting-complete.
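
To make the verification-vs-search asymmetry concrete, here is a toy sketch (my own illustration, using Hofstadter's MIU string-rewriting system rather than a real proof calculus): checking a given derivation is a linear pass, while finding one of bounded length is brute-force search over an exponentially growing space.

```python
# MIU rules, starting from the axiom "MI":
#   1. xI -> xIU    2. Mx -> Mxx    3. III -> U    4. UU -> (nothing)
def successors(s: str) -> set[str]:
    out = set()
    if s.endswith("I"):
        out.add(s + "U")
    if s.startswith("M"):
        out.add(s + s[1:])
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def check_derivation(proof: list[str], target: str) -> bool:
    """Verification is cheap: one linear pass over the candidate derivation."""
    if not proof or proof[0] != "MI" or proof[-1] != target:
        return False
    return all(b in successors(a) for a, b in zip(proof, proof[1:]))

def find_derivation(target: str, max_len: int):
    """Search is expensive: brute force over all derivations up to max_len."""
    frontier = [["MI"]]
    for _ in range(max_len - 1):
        next_frontier = []
        for path in frontier:
            if path[-1] == target:
                return path
            next_frontier += [path + [s] for s in successors(path[-1])]
        frontier = next_frontier
    return next((p for p in frontier if p[-1] == target), None)

proof = find_derivation("MIIU", max_len=4)
print(proof)                            # ['MI', 'MII', 'MIIU']
print(check_derivation(proof, "MIIU"))  # True
```

For a real formal system the shape is the same: a purported proof can be checked mechanically in time polynomial in its length, which is what puts bounded proof search in NP; the hard part is the search.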

answer by Ilio · 2022-08-25T03:18:27.740Z · LW(p) · GW(p)

Here’s one candidate reason for P vs NP: the hard instances of any NPC problem are often the same as the hard instances of any other NPC problem, including a (yet to be formalized) problem that will turn out to be equivalent to proving P vs NP. Then it’s hard to prove almost by definition.

1 comment

Comments sorted by top scores.

comment by Mitchell_Porter · 2022-08-25T18:32:22.567Z · LW(p) · GW(p)

A related question might be, why do so many people think they proved them?