The analogue of $P \neq NP$ for randomized polynomial-time algorithms is actually $NP \not\subseteq BPP$ (or equivalently $RP \neq NP$), assuming that the intended meaning here is "$NP$-complete problems cannot be solved in polynomial time, even with randomized algorithms".[1] More information about these classes is available here.
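For context, the standard relationships between these classes can be summarized as follows (a brief reference sketch, not part of the original comment; the equivalence is a known theorem):

$$P \subseteq RP \subseteq NP, \qquad P \subseteq RP \subseteq BPP, \qquad NP \subseteq BPP \iff NP = RP.$$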
Also, a bit of a nitpick, but it looks like the claim that these widely believed complexity-theoretic assumptions imply
that there is no way, in polynomial time, to get reliable information about A’s secret circuit C (beyond some statistical regularities) from looking at polynomially many input-output samples of C.
is implicitly claiming that a "learning problem" similar to the following is $NP$-hard:
There is a hidden function $f: \{0,1\}^n \to \{0,1\}$ that takes $n$ bits as input and outputs one bit, and can be described with a polynomial-sized circuit. You are given some number of input-output pairs $(x_i, f(x_i))$. Find a polynomial-sized circuit that is consistent with the given inputs and outputs.
It seems likely that this learning problem is indeed computationally difficult, given that the naive approach requires brute-force search through circuits. However, to actually prove $NP$-hardness, there needs to be a reduction from (say) $SAT$ to that problem. I don't see any straightforward approach to constructing such a reduction, even though it is not entirely implausible that one exists (possibly with minor modifications to the problem statement).
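To make the brute-force point concrete, here is a toy sketch of that naive search. The encoding of circuits as gate lists over a small basis is hypothetical and not from the original comment:

```python
from itertools import product

# Toy sketch of the naive brute-force search described above.
# Hypothetical encoding: a circuit is a sequence of gates over n input
# wires; gate t is (op, i, j) and may read any two wires among the n
# inputs and the t earlier gate outputs. The circuit's output is the
# last wire.

OPS = {
    "AND": lambda a, b: a & b,
    "OR": lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
}

def evaluate(circuit, inputs):
    """Evaluate a gate list on a tuple of 0/1 input bits."""
    wires = list(inputs)
    for op, i, j in circuit:
        wires.append(OPS[op](wires[i], wires[j]))
    return wires[-1]

def consistent(circuit, samples):
    return all(evaluate(circuit, x) == y for x, y in samples)

def brute_force_learn(samples, n, max_gates):
    """Enumerate circuits of increasing size until one fits all samples.

    The number of candidate circuits grows exponentially in the gate
    count, which is the blow-up referred to above.
    """
    for k in range(1, max_gates + 1):
        gate_choices = [
            [(op, i, j)
             for op in OPS
             for i in range(n + t)
             for j in range(n + t)]
            for t in range(k)
        ]
        for circuit in product(*gate_choices):
            if consistent(circuit, samples):
                return circuit
    return None

# Hidden function f(x0, x1) = x0 XOR x1, given by its full truth table.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(brute_force_learn(samples, n=2, max_gates=3))
# -> a 3-gate circuit computing XOR,
#    e.g. (('OR', 0, 1), ('NAND', 0, 1), ('AND', 2, 3))
```

Even at this toy scale the candidate count is in the thousands; for $n$-bit inputs and polynomially many gates it is exponential, which is the intuition behind (though not a proof of) the hardness claim.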
I do agree with the conclusion though that the problem being discussed here is very different from what is being captured by statements like $P \neq NP$.
- ^
The statement $P \neq BPP$ actually means "randomized polynomial-time algorithms cannot be 'derandomized' by replacing them with equivalent deterministic polynomial-time algorithms".