Computability and Complexity
post by Ege Erdil (ege-erdil), Metaculus · 2022-02-05T14:53:40.398Z · LW · GW · 5 comments
This is a link post for https://www.metaculus.com/notebooks/9729/computability-and-complexity/
Contents
- Busy beavers
- Bounded halting
- Conclusion
Perhaps the most difficult problem in computer science is to gain some understanding of what a program does without having to run it. This might seem surprising to you. Why can't we simply look at the source code or some other description of the program and "see" what it does?
If so, then consider the program whose pseudocode is the following: pick some enumeration of all tuples $(a, b, c, n)$ of natural numbers with $a, b, c \geq 1$ and $n \geq 3$. Loop over this enumeration and at each step check whether $a^n + b^n = c^n$ or not. If this condition holds at any step then exit the loop, and if it doesn't then continue. This is a simple program and we might ask a simple question about it: does this program halt or not? In other words, if we run this program on a computer without any time or memory constraints, will the program run forever or will it stop at some point?
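To make this concrete, here is one way the pseudocode above might look in Python; the particular enumeration order is my own choice and not essential to the argument:

```python
from itertools import count

def fermat_search():
    # Enumerate all tuples (a, b, c, n) with a, b, c >= 1 and n >= 3
    # by looping over the total size s = a + b + c + n, so that every
    # tuple is reached after finitely many steps.
    for s in count(6):  # the smallest possible size is 1 + 1 + 1 + 3
        for a in range(1, s):
            for b in range(1, s):
                for c in range(1, s):
                    n = s - a - b - c
                    if n >= 3 and a**n + b**n == c**n:
                        return (a, b, c, n)  # halt: counterexample found
    # If no counterexample exists, the loop above never exits.
```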
You might've noticed that asking this question about the behavior of this simple program is actually equivalent to settling Fermat's last theorem: the program halts if and only if the theorem is false. Fermat's last theorem used to be one of the most famous open problems in mathematics before it was finally proven by Andrew Wiles with an extremely complicated argument. The question might therefore appear simple, but in fact even for this simple program it's very difficult to answer.
Asking whether a program with a given source code will halt or not is the subject of the famous halting problem in computer science. The halting problem is often formulated in terms of Turing machines, which are rudimentary computers that nevertheless contain enough complexity to simulate any other computer, but the substance is unchanged if we instead ask the question about programs in any other programming language (throughout this notebook I'll be assuming these languages are Turing complete, which is a technical condition ensuring they are "expressive enough" for some of the constructions in the notebook). The above example might've motivated why the problem is indeed difficult, and in fact we have a proof that in general it's impossible to solve. The proof is an example of a diagonal argument and goes like this:
Suppose that given a source code $s$ and an input $x$, we had a program to compute the function $h(s, x)$ which outputs $1$ if the program with source code $s$ halts when run on input $x$ and outputs $0$ otherwise. We could then define a new program $g$ taking $s$ as an input: if $h(s, s) = 0$ then $g$ returns $0$ and otherwise $g$ loops forever. If $h$ were computable then $g$ would be as well, so there would be a program with some source code $t$ which computes $g$. Now we have a contradiction, since if $g(t)$ halts then this means $h(t, t) = 0$ and therefore the program $t$ doesn't halt when run on itself as an input, contradicting the fact that $g$ does halt on the input $t$; and if instead $g(t)$ loops forever then $h(t, t) = 1$, so that $t$ halts when run on input $t$. Regardless of which possibility is realized, $g$ and $t$ always disagree on the input $t$, so by contradiction our initial assumption must have been incorrect: there is simply no program which computes the function $g$, and so no program which computes $h$ as well.
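In Python-flavored pseudocode the construction looks as follows; `halts` is the hypothetical decider we assumed into existence, so this is an illustration of the argument rather than runnable code:

```python
def halts(source, x):
    """Hypothetical program computing h: returns True iff the program
    with source code `source` halts when run on input `x`. The argument
    above shows no such program can exist."""
    ...

def g(s):
    if not halts(s, s):
        return 0       # halt if the program s does NOT halt on itself
    while True:        # otherwise loop forever
        pass

# Let t be the source code of g. Then g(t) halts iff halts(t, t) is
# False, i.e. iff g(t) doesn't halt: a contradiction either way.
```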
Busy beavers
There is an interesting implication of the unsolvability of the halting problem. Suppose that given some programming language, we consider all valid programs in that language of a given length $n$ which take no input. There are only finitely many such programs, and out of them some halt and some don't. It's a trivial consequence of the finiteness of the number of these programs that among the ones that do halt, there will be a program that maximizes some metric of how long the program runs for: basic instructions executed on the processor, the number of times the program makes changes to memory, the maximal number of consecutive memory cells which have their values changed from the original blank one, et cetera. We can then define a function $f$ which takes $n$ to this maximal number. Different choices of criteria and programming languages give different functions $f$, but the spirit of the construction is the same for all of them. Programs which are defined in this way are called busy beavers: they do the "most work possible" for a program of their length given the constraint that they must not run forever.
Now, suppose that we work with any programming language you like and we fix some compiler for it and some processor architecture which is going to run the compiled program, and we define $f(n)$ as the maximal number of steps a program of length $n$ can take before halting. This is called a busy beaver function, and it's an immediate consequence of the unsolvability of the halting problem that it's uncomputable. Indeed, if we had a program to compute $f$, then given any program-input pair $(P, x)$ we could decide if it halts or not by simply defining a new program $Q$ which feeds $P$ with the input $x$, and then we could simulate $Q$ for $f(\ell)$ steps, where $\ell$ is the length of $Q$. If $Q$ halts at all then it halts in at most $f(\ell)$ steps, so if it hasn't halted by then we know it will never halt. We know that in fact there's no program which solves the halting problem, so this argument implies there's also no program which computes $f$. Notice that the argument still works just as well if we could compute some $g$ which had the property that $g(n) < f(n)$ for only finitely many $n$, so this proves that there is no computable function which grows faster than $f$.
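As a sketch, here's how a program computing $f$ would yield a halting decider; `make_program`, `f_of` and `simulate` are hypothetical helpers (`f_of` standing in for the uncomputable $f$, which is precisely what the theorem rules out):

```python
def decide_halting(P, x):
    # Wrap the pair (P, x) into a single no-input program Q that
    # runs P on the input x; Q's length is determined by P and x.
    Q = make_program(P, x)       # hypothetical wrapper
    limit = f_of(len(Q))         # hypothetical oracle for f
    # If Q halts at all, it halts within f(len(Q)) steps, so a
    # bounded simulation settles the question once and for all.
    return simulate(Q, max_steps=limit).halted
```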
In fact we can also show that $f$ grows faster than any computable function as follows: if $g$ is any computable function and a program computing it has length $L$, then for each $n$ we can define a program of length $L + O(\log n)$ which executes an empty loop $g(n)$ times (how?). This implies $f(L + O(\log n)) \geq g(n)$, and this bound is enough to show that $f$ grows faster than $g$.
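One answer to the "(how?)": hard-code $n$ into a wrapper that first computes $g(n)$ and then counts down from it, as in this sketch (writing down the literal $n$ is what costs the extra $O(\log n)$ characters):

```python
def waster():          # one such program for each fixed value of n
    n = 123456         # hard-coded literal: O(log n) characters
    m = g(n)           # g is the given computable function
    while m > 0:       # empty countdown loop: at least g(n) steps,
        m -= 1         # so f(length of this program) >= g(n)
```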
These conclusions hold in general for all busy beaver functions, but in practice we want to eliminate the ambiguities in the definition coming from all the different choices we have to make, so we often define the busy beaver function $BB(n)$ or $S(n)$ by maximizing the number of shifts/steps taken by a two-symbol, $n$-state Turing machine which has an initially empty input tape. If you don't know what a Turing machine is, the specific definition is not all that important; what's important is that it's equivalent to any other "powerful enough computer" and we take it as a standard to make sure we're all talking about the same function $BB$.
There's something that might be even more disturbing about $BB$. Notice that for any quantity, we have a trivial (though very inefficient) procedure to try to compute it: we can simply pick some system of axioms, such as ZFC set theory, and loop over all proof candidates in this theory. If one such proof is valid and proves that $BB(n) = m$ for some natural number $m$, say, then we halt and return $m$; otherwise we loop forever. We know that in fact $BB$ is uncomputable, so this algorithm must fail, and the only way it can fail is if for large enough inputs $n$, ZFC set theory cannot prove any statement of the form $BB(n) = m$ for a natural number $m$. In this case, we would say that ZFC cannot decide the value of $BB(n)$. In first-order logic (or any other complete logic) we have some guarantee that "every true statement is provable", so the fact that no statement of this form can be proven means there's an actual ambiguity about the value of $BB(n)$ in the theory, whereas in logics which are not complete the theory is simply not strong enough to prove all of its implications. In either case our procedure to compute $BB$ fails, and we discover that in some sense these values are "too big" to be decided by an axiom system such as ZFC. Of course, the same idea would work for an arbitrary theory, not just ZFC.
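The proof-search procedure, sketched; `proof_candidates`, `is_valid_zfc_proof`, `conclusion_of` and `bb_value_claimed` are hypothetical stand-ins for a real proof enumerator and proof checker:

```python
def try_to_compute_BB(n):
    # Loop over all candidate ZFC proofs, say in order of length.
    for p in proof_candidates():              # hypothetical enumerator
        if is_valid_zfc_proof(p):             # hypothetical checker
            m = bb_value_claimed(conclusion_of(p), n)
            if m is not None:                 # conclusion is "BB(n) = m"
                return m
    # If ZFC cannot decide BB(n), no such proof exists and this
    # procedure runs forever instead of returning.
```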
We can then imagine that formal systems can be stratified by the largest value of the busy beaver function that they can decide. For example, we know that Peano arithmetic, which is a weaker system of axioms than ZFC set theory, can prove $BB(1) = 1$ and $BB(2) = 6$. We also know that at some point Peano arithmetic goes from being able to decide the values of $BB(n)$ to not being able to decide them. What is this threshold? In other words, what's the smallest $n$ for which Peano arithmetic cannot decide the value of $BB(n)$? We don't know, but we know that for both ZFC and Peano arithmetic this threshold is at least $5$ and at most $748$ - it's been proven that if ZFC is in fact consistent (meaning that no contradiction can be derived from its axioms), it can't decide the value of $BB(748)$. The idea of the proof is to simply construct a Turing machine that searches for an inconsistency in ZFC set theory and then make it as small as possible. By Gödel's second incompleteness theorem we know that if ZFC is consistent then it can't prove its own consistency, so it also can't prove that this Turing machine doesn't halt, and hence it can't decide the value of the busy beaver function at the number of states of this machine.
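The machine behind that bound, in the same sketch style (again with the hypothetical proof-enumeration helpers from above): it halts if and only if ZFC is inconsistent, so a consistent ZFC can't prove that it runs forever.

```python
def zfc_inconsistency_searcher():
    # Halts iff some valid ZFC proof derives a contradiction.
    for p in proof_candidates():              # hypothetical enumerator
        if is_valid_zfc_proof(p) and conclusion_of(p) == "0 = 1":
            return  # contradiction found: ZFC is inconsistent
```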
An interesting question is to compare Peano arithmetic to ZFC set theory through the lens of the busy beaver function. Theories which can decide larger values of the busy beaver function are stronger in how much they can prove about ordinary arithmetic, and we know that ZFC is in fact a stronger system of axioms than PA, so the question is: how would this be reflected in the values of $BB(n)$ they can decide? Unfortunately computing the exact threshold of decidability of the busy beaver function for either of these theories is likely an insurmountable challenge, but we can compare the best upper bounds we will be able to find. You can find the two questions about this here and here.
Bounded halting
Now that we've seen the general halting problem is impossible to solve, we can scale back our ambitions and ask for something weaker: instead of deciding whether a given program halts at all, what if we want to decide if it halts in a given number of steps?
This one is of course trivial to solve: we simply take the program and simulate it for the given number of steps to see if it halts or not. To make the question interesting we have to ask if there's a better algorithm than that, one that gives us an understanding of what the program is doing without us having to run the program. We can modify the problem like this to make it nontrivial: given a program of length $n$, decide whether there is some input on which it halts within $n$ steps. Having to consider all possible inputs doesn't actually make the problem hopeless to solve, since if the program halts in $n$ steps on a given input it can only read $O(n)$ bits of the input (meaning at most some constant independent of $n$ times $n$), so with trial and error we could solve this problem in exponential time: we simply try all possible inputs of length $O(n)$ and for each input we simulate the program for $n$ steps to see if it halts or not. The exponential time is because there are $2^{O(n)}$ possible inputs that can be expressed using $O(n)$ bits - an exponential function of $n$. This is the bounded halting problem.
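A sketch of the exponential-time brute force; `simulate` is again a hypothetical step-limited interpreter, and `c` is the machine-dependent constant from the paragraph above:

```python
from itertools import product

def bounded_halting_brute_force(program, c=1):
    """Decide whether `program` halts within n steps on some input,
    where n = len(program). Only the first c*n input bits can matter,
    since at most c bits are read per step."""
    n = len(program)
    for bits in product("01", repeat=c * n):     # 2^(c*n) candidates
        run = simulate(program, "".join(bits), max_steps=n)
        if run.halted:
            return True
    return False
```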
The question we could ask is whether the bounded halting problem can be solved efficiently or not. The bounded halting problem is in the complexity class NP: it's a problem that could be solved by a nondeterministic Turing machine in polynomial time. This is because if we're actually given a candidate input, we can check whether the program runs for more than $n$ steps on that input or not by simply simulating the program for $n$ steps on that input, which takes only $O(n)$ steps: it's not just polynomial time, but in fact linear time in this case. The difficulty is that if we think of inputs as given to the program in binary, for example, then there are $2^{O(n)}$ possible inputs we have to check like this.
The famous P = NP problem of computer science is about problems of this nature: if we can check a candidate solution to a problem efficiently, in other words if the problem is in the complexity class NP, does that also mean we can solve the problem efficiently, in other words is the problem also in the complexity class P? If the answer is yes, then we can obviously solve the bounded halting problem efficiently.
It's also possible, though somewhat more difficult, to see that if we can solve the bounded halting problem in polynomial time then this would imply P = NP. To see this, imagine we have an arbitrary NP problem and a program $V$ which verifies solutions to this problem for a given input $x$ in polynomial time, say bounded from above by a polynomial $p(n)$ in the size $n$ of the input. Now consider the program $P_x$ defined by $P_x(s) = 0$ if $V(x, s) = 1$ (in other words if the solution $s$ is valid for the given input $x$), and which loops forever otherwise. $P_x$ is a program of length polynomial in $n$ and its runtime is bounded from above by $C p(n)$ for some constant $C$. If the length of $P_x$ is less than $C p(n)$ then we add some padding to increase its length without changing its functionality.
Now, if we have a polynomial time solution to the bounded halting problem, we can simply give $P_x$ as an input to our solution and see if it halts in $C p(n)$ steps on some input or not. Since $C p(n)$ is a polynomial in $n$, this computation would also take polynomial time in $n$, and by assumption if the program halts at all it halts in $C p(n)$ steps. Since checking if $P_x$ halts is equivalent to checking if the original problem has a solution for the input $x$, we've constructed an algorithm for solving any NP problem in polynomial time given that we can solve the bounded halting problem. In this situation we would say that the bounded halting problem is NP-complete, which means it's among the hardest problems in the class NP: any other NP problem can be reduced to it.
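Here's the reduction in miniature; `verify` stands for the polynomial-time verifier $V$ (assumed given), and the emitted source is schematic:

```python
def make_P(x, bound):
    """Emit the source code of P_x: a program that halts on a candidate
    solution s iff s is valid for the NP instance x, padded with
    comments to length `bound` (the runtime bound C*p(n)), so that the
    bounded halting question lines up with the verifier's runtime."""
    source = (
        "def P(s):\n"
        f"    if verify({x!r}, s):  # poly-time verifier, assumed given\n"
        "        return 0\n"
        "    while True:            # invalid solution: loop forever\n"
        "        pass\n"
    )
    padding = "#" * max(0, bound - len(source))  # harmless comment padding
    return source + padding
```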
The P = NP problem is of course the most famous open problem in computer science, and it's one of the Clay Institute's Millennium Prize Problems. This discussion hopefully clears up why that's the case: P = NP is the natural "bounded" analog of the halting problem, and a solution to it would imply that we can somehow efficiently gain understanding of what arbitrary programs are doing without having to simulate them on input data. This is most likely false, but if it were true it would revolutionize our understanding of how computer programs work. If the discussion so far has seemed too abstract, consider that finding short proofs of mathematical theorems, breaking most online cryptography, and training neural networks efficiently are all problems in NP. If you could find a good enough solution to the bounded halting problem you could do all of these and more.
Richard Borcherds gives 10% odds that P = NP and Scott Aaronson gives 2% to 3% odds. The Metaculus community currently agrees with Aaronson and I also think that his odds are in the right ballpark. What do you think? You can see the community forecast on this question here.
Finally, there is a question about when P = NP is going to be settled. The problem was formally defined in 1971, so we could say that it has been open for around 50 years at this point. The inside view is that, as with the Riemann hypothesis and some of the other Millennium Prize Problems, there's no particular reason to believe we'll have a solution anytime soon. The difficulty is that P = NP is most likely false, and if it's false we have to prove a lower bound on the time complexity of all possible solutions to (say) the bounded halting problem. In general it's very difficult to prove nontrivial lower bounds on the time complexity of a problem and so there has been very little progress on actually disproving P = NP, and this state of affairs appears unlikely to change anytime soon. The community forecast for this is available here.
Conclusion
Variations on the halting problem are of great theoretical interest in computer science, and assuming that we can solve them with good enough algorithms they are in fact of great practical interest as well. The basic question is, as stated at the beginning of the essay, whether we can gain some understanding of what computer programs are doing without having to simulate them. The original halting problem means we can't hope to do this in general, but even a little bit of understanding that's not sufficient to solve the halting problem could revolutionize the field of computer science.
5 comments
Comments sorted by top scores.
comment by AlexMennen · 2022-02-06T21:19:40.430Z · LW(p) · GW(p)
> Notice that the argument still works just as well if we could compute some $g$ which had the property that $g(n) < f(n)$ for only finitely many $n$, so this proves that in fact $f$ must grow faster than any computable function!
Not quite. It is true that $f$ must grow faster than any computable function, but what you've shown is just that there's no computable function growing at least as fast as $f$. It's possible for functions to have incomparable growth rates, by taking turns for which one is bigger infinitely many times.
> In first-order logic (or any other complete logic) we have some guarantee that "every true statement is provable"
Not exactly. Every statement that's true of every model of the axioms is provable from the axioms. If you have a particular model in mind for your axioms, then there can be statements that are true in that model but not provable from the axioms. The trouble is just that the axioms can't fully specify the model.
↑ comment by Ege Erdil (ege-erdil) · 2022-02-07T09:12:44.754Z · LW(p) · GW(p)
> Not quite. It is true that $f$ must grow faster than any computable function, but what you've shown is just that there's no computable function growing at least as fast as $f$. It's possible for functions to have incomparable growth rates, by taking turns for which one is bigger infinitely many times.
True. I overlooked this for some reason, but it's not hard to give an argument for why $f$ must grow faster than any computable function, so it's not a major problem. I've edited the post to fix this.
> Not exactly. Every statement that's true of every model of the axioms is provable from the axioms. If you have a particular model in mind for your axioms, then there can be statements that are true in that model but not provable from the axioms. The trouble is just that the axioms can't fully specify the model.
That isn't different from what I said. Keep in mind that I tried to keep the post as simple as possible, and if I had to define the concept of model for people to make sense of this statement it wouldn't fit well with that attitude.
comment by tailcalled · 2022-02-05T18:48:00.559Z · LW(p) · GW(p)
I think Rice's theorem would also be relevant to mention in these notes. It states that no nontrivial extensional property of programs is decidable. Here "extensional" means purely based on the input/output relation, and not dependent on runtime, code, or similar.
↑ comment by Ege Erdil (ege-erdil) · 2022-02-05T19:01:15.017Z · LW(p) · GW(p)
Good point. I'll probably add it into the post later.
comment by bionicles · 2023-08-12T21:03:31.020Z · LW(p) · GW(p)
It’s incredibly disconcerting for so many brilliant thinkers to accept and repeat the circular logic about the paradoxical anti-halt machine “g” — if you make some g which contains f, then I can make f such that it detects this and halts. If you make some g which invokes f, then I can make f which detects this and halts. By definition of the problem, “f” is the outer main function and paradoxical “g” is trapped in the closure of f which would mean f can control the process, not g. The whole basis for both Gödel Incompleteness Theorems and the Halting Problem is based on this idea we can make some paradox machine that does the opposite of whatever we think, without continuing to consider if such recursive properties might be detectable.
A more difficult case would be the machines which loop forever and never hit a prior (state, tape) tuple, but even in this case, I truly believe the difference is ultimately decidable because an n-state Turing machine which does eventually halt after many steps would necessarily do so by some form of countdown or count up to a limit. This periodicity would presumably be apparent in the frequency domain of the state transition matrix. Also, we might try to connect the (state, tape) tuples (which are enumerable) to the complex plane, and then we could look them up on the Mandelbrot set to see if they converge or diverge. Perhaps we could pick some natural threshold of closeness to the edge of the set where we would categorize machines as being more difficult to analyze. Seems like it would be a natural connection…
(disclaimer, I’m a programmer, not a computational complexity research scientist in this area really, so I am likely wrong, I just think we ought to be wayyyy more skeptical about the bold claims of Gödel and Turing!)
WDYT?