Saturday, March 14, 2020

NP Doesn't Stand For "No Problem"

There are some types of math problems that are so easy to explain, but so difficult to solve, that our best supercomputers might not solve them before the universe dies out. One class of these problems is NP, or "nondeterministic polynomial time". I worked with "polynomial time" problems, and learned about NP problems, in my master's Discrete Optimization course. Thus I read this article with interest:
Imagine you’re a thief robbing a museum exhibit of tantalizing jewelry, geodes and rare gems. You’re new at this, so you only brought a single backpack. Your goal should be to get away with the most valuable objects without overloading your bag until it breaks or becomes too heavy to carry. How do you choose among the objects to maximize your loot? You could list all the artifacts and their weights to work out the answer by hand. But the more objects there are, the more taxing this calculation becomes for a person—or a computer.

This fictional dilemma, the “knapsack problem,” belongs to a class of mathematical problems famous for pushing the limits of computing. And the knapsack problem is more than a thought experiment. “A lot of problems we face in life, be it business, finance, including logistics, container ship loading, aircraft loading — these are all knapsack problems,” says Carsten Murawski, professor at the University of Melbourne in Australia. “From a practical perspective, the knapsack problem is ubiquitous in everyday life”...

“The problem the theoreticians started to look at was how efficiently a particular task can be carried out on a computer,” writes Keith Devlin in the book The Millennium Problems. For example: Given a list of 1 million museum artifacts with their weights and monetary values, and a backpack limited to 25 pounds, a computer would have to run through every possible combination to generate the single one with the most lucrative haul. Given an indefinite amount of time, a computer could use brute force to optimize large cases like this, but not on timescales that would be practical.

“We think you could cover the entire Earth with processors and run them until the heat death of the universe and still fail to solve relatively small instances of appropriate versions of these problems,” says Noah Stephens-Davidowitz, a Microsoft Research Fellow at the Simons Institute in Berkeley, California.

Some NP problems like the knapsack example have a special property: In the early 1970s, Stephen Cook and Richard Karp showed that a variety of NP problems could be converted into a single problem of formal logic. Therefore, if one could be solved and verified efficiently with an algorithm, they all could. This property is known as “NP completeness.”

One of the most stubborn questions in computer science and mathematics is whether these “NP” problems, including the knapsack problem, are truly different from “P” problems, those that can be solved in what is called polynomial time. If P=NP, then it’s possible to efficiently solve every problem whose solutions are easy to verify, says Stephens-Davidowitz. So, if the two classes really are different, the general knapsack problem will always be hard.
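
To see why the brute-force search described above gets out of hand, here is a small Python sketch of my own (the artifacts, weights, and values are made up; only the 25-pound limit echoes the article's example). It simply tries every subset of items and keeps the best one that fits:

    from itertools import combinations

    # Toy artifacts: (name, weight in pounds, value in dollars) -- all made-up numbers.
    items = [("geode", 7, 800), ("tiara", 3, 1500), ("vase", 9, 600),
             ("coin hoard", 12, 2000), ("statuette", 5, 1100), ("mask", 4, 900)]
    capacity = 25  # the 25-pound backpack from the example

    best_value, best_choice = 0, ()
    # Brute force: examine all 2^n subsets of the items.
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w, _ in subset)
            value = sum(v for _, _, v in subset)
            if weight <= capacity and value > best_value:
                best_value, best_choice = value, subset

    print(best_value, [name for name, _, _ in best_choice])

With six items that is only 64 subsets. But each additional item doubles the count, so n items mean 2^n subsets, which is why the million-artifact version is out of reach no matter how fast the hardware is.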
Thus far, an answer to the P=NP question eludes us. Click on the link to read how these problems may relate to cryptography and may someday be solved by quantum computers.
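
What keeps these problems in NP at all is the "easy to verify" property Stephens-Davidowitz points to: handed a proposed haul, anyone can check it quickly. A minimal sketch of such a check, with made-up function and item names of my own:

    def verify_haul(chosen, capacity, target_value):
        """Check a proposed solution in one pass: the chosen items must fit the
        weight limit and reach the target value."""
        total_weight = sum(weight for _, weight, _ in chosen)
        total_value = sum(value for _, _, value in chosen)
        return total_weight <= capacity and total_value >= target_value

    # Example: do these two (made-up) artifacts fit in 25 pounds and reach $2,500 in value?
    print(verify_haul([("tiara", 3, 1500), ("statuette", 5, 1100)], 25, 2500))  # True: 8 lbs, $2,600

Easy to check, brutal to solve: that gap is exactly what the P versus NP question asks about.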
