Thursday, March 01, 2007

Does P = NP?

Notes on computational complexity, from P, NP and Mathematics by Avi Wigderson (excerpts and notes):

"The language of algorithms is slowly becoming competitive with the language of equations and formulas (which are special cases of algorithms) for explaining complex mathematical structures."

In computational complexity, time is measured as the number of elementary operations performed for a given computation. Time is the primary "resource" of algorithms when studying their efficiency.

The equivalence notion in computational complexity -- any understanding we have of one problem can be simply translated into a similar understanding of the other. "The translating functions 'f' and 'h' are called reductions. We capture the simplicity of a reduction in computational terms."

"The asymptotic viewpoint is inherent to computational complexity theory...it reveals structure which would be obscured by finite, precise analysis."

"Efficient computation (for a given problem) will be taken to be one whose runtime on any input of length 'n' is bounded by a polynomial function in 'n'." This is class 'p', at most {An^c}. Polynomial functions typify "slowly growing" functions. Many important natural problems cannot at present be solved faster than in exponential time. Reducing their complexity to (any) polynomial will be a huge conceptual improvement.

The class NP contains all properties 'C' for which membership has a short, efficiently verifiable proof. The verification must run in polynomial time. Intuitively, this is the class of tasks whose successful completion can be easily recognized. The class is extremely rich: thousands upon thousands of problems in it arise naturally out of different necessities. NP problems always have a trivial exponential-time algorithm, the brute-force approach of searching through all possible answers and verifying each one.
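A small sketch of that pattern, using Boolean satisfiability as the example NP problem (the clause encoding below is my own, not the paper's): checking a proposed assignment, the short "proof", takes polynomial time, while the trivial algorithm tries all 2^n assignments and verifies each.

```python
from itertools import product

# A formula is a list of clauses; each clause is a list of literals,
# where literal +i means variable i is true and -i means it is false.

def verify(formula, assignment):
    """Polynomial-time check that an assignment satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force(formula, num_vars):
    """The trivial exponential algorithm: try all 2^n assignments and verify each."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None

# Example: (x1 OR NOT x2) AND (x2 OR x3)
print(brute_force([[1, -2], [2, 3]], num_vars=3))
```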

If P = NP, then the implications are colossal: every instance of these tasks can be solved, optimally and efficiently. Is it possible that for every task for which verification is easy, finding a solution is not that much harder? What about the leap of creativity it takes to find a solution? How do you automate that?

The definition of class P is symmetric; the definition of class NP is asymmetric -- i.e. it is not always the case that certifying an object does not have property C is as easy as certifying that it does (a problem is in coNP if its complement is in NP).

If C can be reduced to D efficiently, we write C ≤ D; and if D is in P, then C is in P as well. This means that, in this case, solving the classification problem C is not computationally much harder than solving the classification problem D. In some cases this allows techniques to be imported from one area into another. The power of reductions is to relate seemingly unrelated notions. Efficient reductions are the backbone of computational complexity.
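A toy illustration of an efficient reduction, using the classic textbook pair Independent Set and Vertex Cover (not an example taken from the paper): the translation is trivial to compute, and correctness rests on the standard fact that a graph on n vertices has an independent set of size k exactly when it has a vertex cover of size n - k.

```python
def reduce_independent_set_to_vertex_cover(vertices, edges, k):
    """Map an Independent Set instance (G, k) to an equivalent Vertex Cover instance (G, n - k)."""
    return vertices, edges, len(vertices) - k

# Example: the triangle on {1, 2, 3} has an independent set of size 1,
# i.e. a vertex cover of size 2.
print(reduce_independent_set_to_vertex_cover([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 1))
```

If we had an efficient Vertex Cover solver, composing it with this translation would give an efficient Independent Set solver; that is all that C ≤ D asserts.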

Hardness and completeness -- a problem D is called c-hard if for every C in the class c we have C ≤ D. If we further have that D is in c, then D is called c-complete. In other words, if D is c-complete, it is the hardest problem in the class c: if we manage to solve D efficiently, we have done so for all other problems in c. If a class has many complete problems, by definition they all have essentially the same complexity: if we manage to prove that any one of them cannot be efficiently solved, we have done so for all of them.
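The "solve one, solve them all" point can be written down directly. The sketch below is schematic: both functions stand in for hypothetical algorithms, not anything actually known.

```python
# If D is c-complete, every C in c comes with a reduction reduce_C_to_D,
# so any efficient solver for D immediately yields one for C.

def solve_D(instance):
    """Hypothetical efficient (polynomial-time) solver for the complete problem D."""
    raise NotImplementedError("no such solver is known when D is NP-complete")

def solve_C(instance, reduce_C_to_D):
    """Solve any problem C in the class by translating it into D first.

    If the reduction and solve_D both run in polynomial time, so does this,
    since a polynomial composed with a polynomial is still a polynomial.
    """
    return solve_D(reduce_C_to_D(instance))
```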

NP-complete problems are everywhere, and permeate all branches of science. Learning that a problem is NP-complete usually means abandoning the search for an efficient algorithm for it and looking elsewhere. The implication, however, is huge for anybody who finds an efficient algorithm for just one NP-complete problem, because that algorithm can be translated to solve all the others. Also, there is a near total absence of natural problems in the huge void of NP problems that are neither in P nor NP-complete. This raises wonder, much like dark matter.

NP-complete is a formal stamp of difficulty rather than a mere general feeling. It seems, then, that we have two classes: P for problems that can be efficiently solved, and NP-complete for those that apparently cannot (if P = NP they collapse into one). If P is not equal to NP, though, then NP contains infinitely many distinct levels of difficulty in between. And if a problem is in both NP and coNP, it cannot be NP-complete unless NP = coNP.

Natural problems, under natural input distributions, are almost all NP-complete.

The thrust of lower-bound arguments is that more time buys more computational power: there are functions computable in time n^3, say, that are not computable in time n^2. The proof relies on a "universal algorithm" which can simulate every other algorithm with only a small loss in efficiency.
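To make the "universal algorithm" idea concrete, here is a toy interpreter for a made-up counter-machine language (the instruction set is invented for this sketch): one fixed program simulates any other program, charging itself only constant overhead per simulated step, and can cut the simulation off at a chosen step budget.

```python
def simulate(program, registers, step_budget):
    """Run `program` (a list of (op, args) tuples) for at most step_budget steps.

    Supported ops: ("inc", r), ("dec", r), ("jnz", r, target), ("halt",).
    Returns the registers on halt, or None if the budget is exhausted first.
    """
    pc, steps = 0, 0
    while steps < step_budget and pc < len(program):
        op = program[pc]
        steps += 1
        if op[0] == "inc":
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "dec":
            registers[op[1]] -= 1
            pc += 1
        elif op[0] == "jnz":
            pc = op[2] if registers[op[1]] != 0 else pc + 1
        else:  # "halt"
            break
    return registers if pc >= len(program) or program[pc][0] == "halt" else None

# Example program: move the value of register 0 into register 1.
prog = [("jnz", 0, 2), ("halt",), ("dec", 0), ("inc", 1), ("jnz", 0, 2), ("halt",)]
print(simulate(prog, {0: 3, 1: 0}, step_budget=100))  # {0: 0, 1: 3}
```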

Any natural proof of a lower bound implies subexponential algorithms for inverting one-way functions. The ability to describe a property both universally and existentially constitutes necessary and sufficient conditions -- a holy grail of mathematical understanding.

The artificial upper bound for algorithms is exponential, but this is only because we are inherently bounded by time.

Almost all natural phenomena, if they are to be accurately represented by an algorithm (an abstract process), show running times that grow exponentially with linear increases in input size.

Finding is hard, verifying is easy: natural selection rule?
