Computational Limits on Efficiency
post by vibhumeh · 2025-01-21T18:29:36.997Z · LW · GW · 1 comment
In this article I will attempt to explore what can be derived if we assume there are no Computational Limits on Efficiency. However, before we get into that, let's first have a short introduction to what `Computational Efficiency` actually means.
"Efficient programming is programming in a manner that, when the program is executed, uses a low amount of overall resources pertaining to computer hardware. " ~geeksforgeeks
This is unfortunately a weak definition, and fails to capture the full picture of what computational efficiency really means. I could not find a better definition online, so I will now attempt to write my own definition of Computational Efficiency:
"Computational Efficiency of an algorithm refers to how well the algorithm succeeds in minimizing the expense of computational resources, and time. An algorithm is considered optimal if no better asymptotic time complexity exists for solving the same problem."
This definition gives a good, concise overview. To aid understanding for those unfamiliar, I will give a couple of short examples of what computationally efficient and inefficient algorithms look like. Assume we have a list of n integers; it could look like this:
x = [1, 8, 42, 3, 77, 89, 44]
and our goal is to find the position of the first occurrence of some given element (assume for this example that the element is 44). Now, an inefficient algorithm for this would be to first subtract the target number (in our case 44) from each element one by one, and then iterate through the new list searching for the first element with the value 0. In the worst-case scenario this takes 2n iterations (if you remember, n is the length of the list; in our example n = 7): n iterations to subtract and n more to find the first zero. Strictly speaking O(2n) is the same asymptotic class as O(n), since big-O notation drops constant factors, but the raw step count is twice what it needs to be.
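To make this concrete, here is a minimal Python sketch of the two-pass approach (the function name and the convention of returning -1 when the target is absent are my own choices):

```python
def find_first_two_pass(xs, target):
    """Inefficient: subtract the target from every element, then scan for the first zero."""
    diffs = [x - target for x in xs]      # first pass: n subtractions
    for i, d in enumerate(diffs):         # second pass: up to n comparisons
        if d == 0:
            return i
    return -1  # target not found

x = [1, 8, 42, 3, 77, 89, 44]
print(find_first_two_pass(x, 44))  # prints 6
```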
Now, it's very easy to see how we can increase the efficiency of our algorithm, i.e., reduce the computation required to solve the same problem by cutting down the number of steps. Instead of subtracting the target number from each element in a first pass, we can (obviously) compare each element to the target directly until we find it in the list. This has a worst-case time complexity of O(n), which means that in the worst case we will require n steps to solve the problem with this algorithm.
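And a sketch of the single-pass version (Python's built-in `list.index` performs essentially the same scan, though it raises an error instead of returning -1):

```python
def find_first_linear(xs, target):
    """Single pass: compare each element to the target directly."""
    for i, value in enumerate(xs):
        if value == target:
            return i   # at most n comparisons in the worst case
    return -1  # target not found

x = [1, 8, 42, 3, 77, 89, 44]
print(find_first_linear(x, 44))  # prints 6
```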
Huh, that was easy, right? We found an algorithm that does half the work of our old one just by following our intuition... so can't computation always become more efficient? Are we limited only by the ingenuity of the programmer? Then I challenge you: given the same problem of finding the first occurrence of a target element in an unordered, randomly arranged list, can you give me an algorithm with a lower worst-case time complexity than O(n)?
Tried? You most likely failed: as far as anyone knows, there is no better algorithm for this problem. This is a perfect example of a computational limit on efficiency: there exist certain computational steps that must be taken, i.e., there exist computations for which there are no shortcuts. But can you prove this? It sounds intuitive, yet in general we have no solid proof. Let's look at our example again: the most important keyword I highlighted is that the list must be 'unordered'. What does that mean? It means that the location of each element is random and lacks any pattern. However, does true randomness exist? If it does not, and the arrangement carries some hidden structure, then the problem might be solvable with a worst-case time complexity of less than O(n). You can see the difficulty in conclusively proving the existence of this limit.
To give another example, let's look at a sorted/ordered list. For simplicity, let's assume our example list is sorted in ascending order.
list = [1, 3, 8, 42, 44, 77, 89]
Now, using our older so-called 'efficient' algorithm we will still get the same worst-case time complexity of O(n), for the case where the number turns out to be the largest.
However, on a sorted list we have a new 'most efficient' algorithm: binary search. Binary search is simple: we start from the middle; if the middle element is greater than the element we are looking for, we discard the right half and keep only the left half of the list (for our case of ascending order), and if it is smaller, we keep the right half. We then repeat the process on the remaining segment until the current 'middle' element equals the element we are looking for. The worst-case time complexity of this is O(log n).
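Here is a standard iterative sketch of binary search (the variable is named `lst` rather than `list` to avoid shadowing the Python built-in; the standard library's `bisect` module implements the same idea):

```python
def binary_search(lst, target):
    """Return an index of target in an ascending sorted list, or -1 if absent."""
    lo, hi = 0, len(lst) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if lst[mid] == target:
            return mid
        elif lst[mid] > target:
            hi = mid - 1   # keep only the left half
        else:
            lo = mid + 1   # keep only the right half
    return -1  # target not found

lst = [1, 3, 8, 42, 44, 77, 89]
print(binary_search(lst, 44))  # prints 4
```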
Now you will see my arguments for what behavior we might observe if all computations (other than those already proven to be irreducible) turn out to be reducible. Think of it as a thought experiment about what could be.
Now we know two things:
A) what computational limits on efficiency are, and
B) why they are so difficult to prove even though they seem intuitive.
I have an interesting image of computational efficiency in mind. Imagine you are standing on an infinite grid, each square 1 m × 1 m. You are at position A, and the 'solution' is in another grid square B. To travel from A to B you have infinite routes, and each square you cross on the way to the 'solution' B is a computational 'step'. There are infinitely many distances you could travel to reach B, but there is only one optimal route; the distance along that route is the 'displacement' between A and B, which in our analogy corresponds to the optimal algorithm. Now, if computational reducibility has no fundamental limit, does that imply all solutions are just an adjacent square away? Or do some problems require paths with unavoidable complexity?
My goal now is to question: what if for some situations there is no fundamental limit on computational efficiency? If all computation is reducible, then we should expect that every algorithm has an infinite path toward improvement. If reduction follows a linear trend—where complexity decreases at a predictable rate—then it seems reasonable to assume that the limiting time complexity could trend toward O(1). However, this assumption raises an important question: Does reducibility necessarily lead to O(1), or could the process stabilize at some other lower bound, such as O(log n)?
Now, since in this scenario we are assuming that all computation must be reducible to a single step, we must ask ourselves what this one step could be. And that one step must be *getting* the answer, i.e., the single step is obtaining the answer/solution directly from our initial problem.
The first (and perhaps naïve) solution that came to my mind was an infinitely large ideal hash table: one that precomputes every possible problem and stores its solution for instant retrieval. It must be an ideal hash table allowing O(1) lookups. However, hashing itself involves computational steps, so it still fails to perfectly represent what we need. (Note: since this table would require infinite memory, it is practically impossible, but let us entertain the possibility, since we are looking at the question from a purely computational perspective.) This example has other flaws too. To have such a hash table, one needs to have computed every solution stored in it in the first place. I would like to argue that whatever preprocessing is done before the question is known is irrelevant, and that only processing done after knowing the question matters, but this is a shaky argument, so I will not lean on it. Despite these flaws, I present the idea because even in its imperfection it raises an interesting question: is there a way to achieve true O(1) computation without relying on brute-force precomputation? Can we find a model where retrieval is instantaneous? Since the most obvious way to solve something in a single step is to already know the solution beforehand, this suggests that O(1) computation would require either precomputation or a fundamentally different approach to problem-solving.
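To make the lookup-table idea concrete, here is a toy sketch (my own illustration, and only workable because the 'problem domain' here is finite): precompute the answer for every element that appears in the list, so that answering any individual query afterwards is a single dictionary lookup.

```python
x = [1, 8, 42, 3, 77, 89, 44]

# Precompute the first-occurrence index of every element in the list.
# (The idealized version would cover every conceivable query, which is
# exactly where the infinite-memory problem comes in.)
lookup = {}
for i, value in enumerate(x):
    lookup.setdefault(value, i)   # keep only the first occurrence

# After the (expensive) precomputation, each query is an O(1) average-case lookup.
print(lookup.get(44, -1))  # prints 6
print(lookup.get(5, -1))   # prints -1: never precomputed, so no stored answer
```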
Now I ask you, the reader, to suggest alternative O(1) approaches I may have overlooked, or perhaps to show why O(1) is impossible in certain cases, though, as we know, proving a negative is very difficult.
1 comment
comment by Viliam · 2025-01-26T20:53:08.781Z · LW(p) · GW(p)
Probably not what you want, but another option is caching the results after computing them for the first time. If you have large enough memory, and you get the same questions asked repeatedly, only the first time will be expensive, and afterwards it will be O(1), which means that cost will converge to O(1) over time.
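A minimal sketch of this caching idea (illustrative only; `solve` is a stand-in for any expensive, deterministic computation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # unbounded cache: "large enough memory"
def solve(query):
    # stand-in for some expensive computation
    return sum(i * i for i in range(query))

solve(10_000_000)   # first call: pays the full cost of the computation
solve(10_000_000)   # repeated call: answered from the cache in O(1)
```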