Are computationally complex algorithms expensive to have, expensive to operate, or both?

post by Noosphere89 (sharmake-farah) · 2023-06-02T17:50:09.432Z · LW · GW · 1 comment

This is a question post.

Specifically, let's say you are handed a Boolean 3-SAT problem, and you finally finish solving the given 3-SAT instance by running a superpolynomial-time algorithm.

Now, you are given another Boolean 3-SAT problem. Can you amortize the complexity costs across 3-SAT instances, or does each 3-SAT instance require you to pay the full complexity cost of the algorithm you run?

To give an analogy for the question I'm asking: I'm trying to determine whether computationally hard problems are CapEx-dominated, with the OpEx of running each particular 3-SAT instance being low (making it more like an investment, or perhaps buying something), or OpEx-dominated, where each 3-SAT instance remains just as expensive and can't be amortized, much like renting something.

Equivalently, the question is how much you can amortize the costs of solving similar problems, such as 3-SAT for NP-complete problems or True Quantified Boolean Formulas for PSPACE-complete problems.

Challenge: If you can show that complexity costs can be reduced by amortizing over instances of a problem, how far up the complexity hierarchy does this go? How complex does a problem need to be before the costs can no longer be amortized?
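
To make the setup concrete, here is a toy Python sketch (illustrative only; a brute-force enumerator stands in for the superpolynomial algorithm). Caching answers keyed on the exact instance only amortizes cost when the very same instance recurs:

from itertools import product

def brute_force_3sat(clauses, num_vars):
    # clauses: list of 3-tuples of nonzero ints; (1, -2, 3) means x1 OR NOT x2 OR x3.
    for bits in product([False, True], repeat=num_vars):  # 2^n assignments: the per-instance OpEx
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
            return assignment
    return None

cache = {}  # the "CapEx": remembered answers, useful only if the exact same instance recurs

def solve_with_cache(clauses, num_vars):
    key = (tuple(sorted(clauses)), num_vars)
    if key not in cache:
        cache[key] = brute_force_3sat(clauses, num_vars)  # full cost paid here
    return cache[key]

formula = [(1, -2, 3), (-1, 2, -3), (2, 3, 1)]
print(solve_with_cache(formula, 3))                    # pays the full cost
print(solve_with_cache(formula, 3))                    # free: an exact repeat
print(solve_with_cache([(1, 2, 3), (-1, -2, -3)], 3))  # a new instance pays the full cost again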

Answers

answer by Vanessa Kosoy · 2023-06-11T15:08:54.487Z · LW(p) · GW(p)

A relevant classical result in complexity theory is: if NP ⊆ P/poly, then the polynomial hierarchy collapses. Which is to say, this is pretty unlikely. This means that you can't solve SAT in time which is polynomial in the size n of the circuit, even if you had unlimited time to precompute an advice string for the given n that you need.
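
For reference, the classical result invoked here appears to be the Karp–Lipton theorem:

\[ \mathsf{NP} \subseteq \mathsf{P/poly} \;\Longrightarrow\; \mathsf{PH} = \Sigma_2^{\mathsf{p}} \]

i.e. if SAT could be decided in polynomial time with the help of a single polynomial-size advice string precomputed for each input size n, the polynomial hierarchy would collapse to its second level.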

answer by Dagon · 2023-06-02T18:43:48.432Z · LW(p) · GW(p)

"over sufficiently long timeframes, all costs are variable costs".   CapEx vs OpEx is an accounting distinction, and a short-term optimization question.  It's not fundamental.

How this works in practice, for large computational challenges, is that there is a curve for how much advance work reduces the per-iteration cost of the actual results.  Where you want to be on this curve depends on the specifics of the prep (algorithmic improvements, better caching or parallelism (note: these often optimize in opposite directions), hardware improvements or tuning, etc.).  Each of these dimensions has its own curve, and we often don't know in advance what the curve actually looks like, so there's a meta-curve we guess at about R&D vs implementation excellence for each topic.

It gets very complex very quickly, in ways that are difficult to model numerically, because different problems are JUST different enough that some of the investments OR meta-investments can't be reused.

As I often say at work, "that's why they pay us the medium bucks".

answer by Dave Orr · 2023-06-02T18:16:48.140Z · LW(p) · GW(p)

In general, to solve an NP-complete problem like 3-SAT, you have to spend compute or storage.

Suppose you solve one 3-SAT problem. If you don't write down the solution and the steps along the way, then you have no way to get the benefit of that work for the next problem. But if you do store the results of the intermediate steps, then you need to store data that is itself superpolynomial in size.

In practice you can often do much better than that, because the problems you're solving may share data or characteristics that lead to shortcuts; but in the general case you have to pay the cost every time you need to solve an NP-complete problem.
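
As a rough illustration of that "shared characteristics" shortcut (a toy sketch, not part of the answer; a brute-force enumerator stands in for the real solver): if successive instances share a large common core of clauses, you can precompute the core's satisfying assignments once and reuse them, though the cached data can itself be exponentially large in the worst case.

from itertools import product

def sat_assignments(clauses, num_vars):
    # All satisfying assignments of a clause set (exponential-time precomputation).
    solutions = []
    for bits in product([False, True], repeat=num_vars):
        a = {i + 1: b for i, b in enumerate(bits)}
        if all(any(a[abs(lit)] == (lit > 0) for lit in c) for c in clauses):
            solutions.append(a)
    return solutions

core = [(1, -2, 3), (-1, 2, 4), (2, -3, -4)]  # the part shared by many instances
core_solutions = sat_assignments(core, 4)     # one-time "CapEx" (can be exponentially large)

def solve_extension(extra_clauses):
    # Per-instance "OpEx": check only the few extra clauses against the cached core solutions.
    for a in core_solutions:
        if all(any(a[abs(lit)] == (lit > 0) for lit in c) for c in extra_clauses):
            return a
    return None

print(solve_extension([(1, 2, 3)]))     # cheap: reuses the precomputed core work
print(solve_extension([(-1, -2, -4)]))  # also cheap, for the same reason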

comment by Noosphere89 (sharmake-farah) · 2023-06-02T18:50:50.949Z · LW(p) · GW(p)

So this means that, in general, you can't gain cost advantages by solving other problems of the same or similar computational complexity?

1 comment


comment by gwern · 2023-06-02T19:13:26.316Z · LW(p) · GW(p)

This is a difficult question to answer, of course, because you're asking such a broad question and it's going to depend on stuff like the actual distribution rather than the worst case, unless you define a class so narrow that each instance offers some gains on every other instance, which is unusual. No-free-lunch, etc. Sometimes there are big savings (this comes up in cryptography, where entities like the NSA can attack the world for a lot cheaper than you would think based on the computational cost of individual attacks), but usually there aren't - AFAIK there is no general way to solve 1 arbitrary 3-SAT problem and get a gain on another. Only if there is some sort of similarity, say, because they are all based on real-world problems (which is much narrower than 'every logically possible problem'), can you hope to, say, distill the search into a neural net blackbox and amortize gains.

You probably want to look into algorithmic information theory; something like OOPS would be relevant here because it can solve every problem, building on Levin search, and attempts to share or amortize between multiple problems. (This also more closely resembles what you're really interested in, which is doing this online as you solve problems.)