Are "non-computable functions" always hard to solve in practice?
post by Morpheus · 2021-12-20T16:32:25.118Z · LW · GW · 1 comment
This is a question post.
I am trying to get a better understanding of the philosophical implications of non-computable functions, and this is an idea I got stuck on and don't know the answer to: for a problem Π, does Π being undecidable imply that Π is infeasible to compute even when the domain of all the variables is made finite? (Clarification below)
One well-known result in CS theory is that there is a large class of problems that are "undecidable", meaning there does not exist a Turing machine that can determine the answer to your problem for all natural numbers. One example is the "busy beaver game", where you try to find the n-state Turing machine that halts after producing the largest number of 1s on the tape. What is impressive about the busy beaver function is that it explodes really quickly, and I believe its values are only known for n up to 4 or 5. It is relatively easy to show that determining whether a given Turing machine is a busy beaver is undecidable for arbitrary n.
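To make the flavor of this concrete, here is a minimal brute-force sketch (the function name, machine encoding, and step cap below are my own arbitrary choices, not a standard implementation): it enumerates every n-state, 2-symbol machine and simulates each one for a bounded number of steps, so it can only ever certify a lower bound on BB(n). Deciding which of the machines still running at the cutoff would eventually halt is exactly the hard, in general undecidable, part.

```python
from itertools import product

def busy_beaver_lower_bound(n, step_cap=100):
    """Enumerate all n-state, 2-symbol Turing machines, run each for at most
    step_cap steps, and return the most 1s left on the tape by any machine
    that halted in time. Machines still running at the cap are skipped, so
    this is only a lower bound on BB(n)."""
    HALT = n  # pseudo-state meaning "halt"
    # An action is (symbol to write, head move, next state).
    actions = list(product((0, 1), (-1, 1), list(range(n)) + [HALT]))
    best = 0
    # A machine assigns an action to every (state, read symbol) pair.
    for table in product(actions, repeat=2 * n):
        tape, pos, state = {}, 0, 0
        for _ in range(step_cap):
            write, move, nxt = table[2 * state + tape.get(pos, 0)]
            tape[pos] = write
            pos += move
            state = nxt
            if state == HALT:
                best = max(best, sum(tape.values()))
                break
    return best

print(busy_beaver_lower_bound(2))  # prints 4, which happens to equal BB(2)
```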
I recently realized, though, that the whole non-computability thing (and the Church–Turing thesis, for that matter) is kind of cheap, since we humans are not actually always interested in an algorithm that works for all natural numbers; something that works for all n up to TREE(3) or so would be good enough. By the Myhill–Nerode theorem, every finite language is regular and thus decidable, so any busy beaver search that you could run on a finite Turing machine with TREE(3) available memory cells (which would be equivalent to a finite state automaton) is decidable (I am not 100% sure this dodges something like Rice's theorem for ordinary Turing machines, which might imply that these automata would be hard to find). In general, this suggests to me that for any problem where you actually have the hardware to solve it, the requisite "algorithm" also exists.
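As a toy illustration of why the bounded version is decidable (a minimal sketch under my own made-up machine encoding): once memory is bounded there are only finitely many configurations, so any run either halts or revisits a configuration and loops forever, and simple cycle detection tells you which.

```python
def halts(step, initial_config):
    """Decide halting for a machine whose configurations come from a finite
    set (e.g. a Turing machine restricted to a fixed tape length). `step`
    maps a configuration to its successor, or to None when the machine halts.
    Because the configuration space is finite, we must either reach None or
    revisit a configuration, so this loop always terminates."""
    seen = set()
    config = initial_config
    while config is not None:
        if config in seen:
            return False  # repeated configuration: the machine loops forever
        seen.add(config)
        config = step(config)
    return True  # step returned None: the machine halted

# Toy "machine": a counter mod 7 that halts once it returns to 0 after
# having passed through 3. Its configuration space has only 14 elements.
def step(config):
    counter, seen_three = config
    if counter == 0 and seen_three:
        return None
    return ((counter + 2) % 7, seen_three or counter == 3)

print(halts(step, (1, False)))           # True: this run eventually halts
print(halts(lambda c: (c + 1) % 10, 0))  # False: a mod-10 counter never halts
```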
In principle, I don't see a problem with adding the assumption "and n < TREE(4)" to the inductive hypothesis in your proof by induction that your algorithm is correct.
In practice, of course, it seems hard to leverage this assumption for anything other than brute force, and it's not clear that what you would end up with is anything more than a table of the correct solutions.
Answers
answer by davidad
I would say no.
- Type-checking for many popular programming languages (e.g. Java, C++, C#, Rust, OCaml, Scala, Swift) is undecidable, and for practical purposes this fact is little more than a curiosity.
- The question of whether, given a finite POMDP, any policy exists that achieves at least zero expected reward is undecidable, and for practical RL this is not even much of a curiosity.
- Any well-defined function from a finite domain (say, {0, …, TREE(3)}) to the natural numbers is computable, because there exists a finite encoding of a lookup table (a minimal sketch of this appears after this list). No Myhill–Nerode needed.
- I think a more compelling formalization of the Busy Beaver problem's hardness is that for n ≥ 748, the value of BB(n) is independent of ZF set theory. In some sense BB(748) alone is uncomputable, even though it's just a single natural number rather than a function that has to work for infinitely many inputs.
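To spell out the lookup-table point from the third bullet (a minimal sketch; the example function and domain are arbitrary placeholders): once the whole graph of a function on a finite domain is written down, "computing" it is just a terminating table lookup.

```python
def tabulate(f, finite_domain):
    """Materialize f as a finite lookup table, i.e. a finite encoding of f."""
    return {x: f(x) for x in finite_domain}

# Whatever exotic definition f has, the table is an ordinary, always-terminating
# algorithm for f on this domain: just look the argument up.
table = tabulate(lambda x: x * x, range(1000))

def computable_f(x):
    return table[x]

print(computable_f(12))  # 144
```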
↑ comment by Morpheus · 2021-12-20T19:18:42.023Z · LW(p) · GW(p)
Thanks for the examples and links! Hmm... the programming language examples definitely give me some more "grounding" for the concepts. I also just discovered your recent post [LW · GW], but to be honest a lot of it, like the Type Two Theory of Effectivity, is above my head at the moment.
1 comment
Comments sorted by top scores.
comment by Morpheus · 2022-01-02T04:48:59.327Z · LW(p) · GW(p)
I looked deeper into this and learned a lot about metamathematics. I also rediscovered Logical Induction, which is ingenious and super interesting, though I don't understand it deeply yet:
Similarly, given any polynomial-time method for writing down computer programs that halt, logical inductors learn to believe that they will halt roughly as fast as the source codes can be generated. Furthermore, given any polynomial-time method for writing down computer programs that provably fail to halt, logical inductors learn to believe that they will fail to halt roughly as fast as the source codes can be generated. When it comes to computer programs that fail to halt but for which there is no proof of this fact, logical inductors will learn not to anticipate that the program is going to halt anytime soon, even though they can’t tell whether the program is going to halt in the long run. In this way, logical inductors give some formal backing to the intuition of many computer scientists that while the halting problem is undecidable in full generality, this rarely interferes with reasoning about computer programs in practice.