Your Favorite NP-Complete Cheat

My favorite hack (if not exactly an NP problem) must be the Quake 3 engine’s fast inverse square root function.

http://www.codemaestro.com/reviews/9

Jeff: Unfortunately, not all NP-complete problems have good approximations.

The very definition of NP-complete shows this to be false; that a problem is NP-complete means all problems in NP can be reduced to it in polynomial time. If you have a polynomial-time algorithm for one NP-complete problem, you have one for every problem in NP.

While I and some of the other critics here have done a pretty good job covering where Jeff went wrong, I decided to give this a go myself: http://joeganley.com/2008/11/np-completeness-i.html … I plan to write further installments in the coming weeks on attacking NP-hard problems: heuristics, approximation algorithms, exact algorithms, restricted problem inputs, and general optimization algorithms (like simulated annealing).

We know exactly what NP-Complete means: NP is defined as the set of decision problems that can be solved in polynomial time on a nondeterministic Turing machine, and the NP-Complete problems are the hardest of those — the ones to which every other problem in NP can be reduced in polynomial time.

P is the set of decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

We know exactly how to show that a problem is NP-Complete: show that it is in NP, and that with polynomial time on a deterministic Turing machine we can convert solving some known NP-complete problem into solving that problem. Reducing a problem is another word for encoding it, and a problem is NP-Complete if the encoding work is small (polynomial).

If you wanted to, you could write a program to convert any NP-complete problem into some program that would run in polynomial time on a nondeterministic Turing machine, and your conversion program would itself run in polynomial time if you did it correctly.
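A toy sketch of what such a conversion looks like in practice (my own example, not from the thread): a graph has an independent set of size k exactly when its complement graph has a clique of size k, so flipping the edges reduces one NP-complete problem to the other in polynomial time.

```python
from itertools import combinations

def complement(n, edges):
    """The reduction: flip every edge. This is the cheap, polynomial part."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - edges

def has_clique(n, edges, k):
    """Brute force: are there k vertices that are pairwise connected?"""
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

def has_independent_set(n, edges, k):
    """Brute force: are there k vertices with no edges among them?"""
    return any(all(frozenset(p) not in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

# A 5-cycle: both its independence number and its clique number are 2.
n = 5
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}

# Solving CLIQUE on the complement answers INDEPENDENT-SET on the original.
for k in range(1, n + 1):
    assert has_independent_set(n, edges, k) == has_clique(n, complement(n, edges), k)
```

The expensive part stays expensive (the brute-force search); only the conversion is cheap. That asymmetry is the whole idea behind a reduction.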

The definitive definition paper
http://www.cs.toronto.edu/~sacook/homepage/1971.pdf.gz

I was in a class where we did this, working through the translation between 3-SAT and a nondeterministic Turing machine. It was boring.

Can’t you all see Jeff is just trying to provoke you? After all, he’s quoting a cartoon as evidence in his comment.

I think this is a good strategy…

  1. pretend you really don’t get the point
  2. have a bunch of foolproof explanations attached to your blog post for free.

Well, have fun.

@Alex: just because Jeff has no regard for mathematical truth doesn’t mean that it isn’t worth our time if we do. (I don’t mean this negatively, necessarily. I think it’s a natural outcome of combining a practical, only-when-I-need-it outlook with the “strong opinions, weakly held” position. Just because it’s bullshit in the Harry Frankfurt sense doesn’t mean it isn’t a fine way to go about things.)

I’ll answer the original question:

  • Simulated Annealing
  • Great Deluge
  • Genetic Algorithms
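Of those, simulated annealing is the quickest to sketch. Here’s a toy 2-opt annealer for TSP in Python (my own sketch; the cooling schedule and parameters are arbitrary, not tuned):

```python
import math
import random

random.seed(0)

def tour_length(tour, pts):
    """Total length of the closed tour through the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, steps=20000, t0=1.0, cooling=0.9995):
    """Simulated annealing with 2-opt moves: always accept improvements,
    accept worse tours with probability exp(-delta/T), and cool T down."""
    tour = list(range(len(pts)))
    cur = tour_length(tour, pts)
    best_tour, best = tour[:], cur
    t = t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        delta = tour_length(cand, pts) - cur
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour, cur = cand, cur + delta
            if cur < best:
                best_tour, best = tour[:], cur
        t *= cooling  # lower temperature means fewer uphill moves over time
    return best_tour, best
```

No guarantees, of course, but on small instances it tends to land on a decent tour almost instantly, which is exactly the trade this whole thread is about.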

Writing the optimal blog post would have taken a very very long time.

Nice approximation, Jeff.

Poor old Jeff. He tries, he really does.

Well, after the beating he’s taking from various other nerds, I feel I ought to make him feel better.

Ok here goes.

I had absolutely no idea what you were talking about, and even less about what your detractors are talking about.

You’re not the only guy who doesn’t understand stuff, Jeff.

I think what just happened here is called market segmentation.
Some part of Jeff’s readers have a course in computability theory under their belt, so they are rightfully angry at the informal definition of a very formal entity.
The other part, whose lives were full without using the term NP-complete, may have gotten a glimpse of this interesting notion.
Kudos for this, Jeff.
Here’s Dorit Hochbaum’s book about approximations (not hacks) for NP-hard problems:
http://www.ieor.berkeley.edu/~hochbaum/html/book-aanp.html
(I guess this is more for the first crowd…)

Hey Jeff,
Cool cartoon.

No, I do not take what I read as gospel; anyone who takes a single opinion from the interweb, and embarks upon a course of action based upon that, deserves whatever they get.

I read this blog because I find it entertaining, and sometimes it prompts me to go and learn something new, or revisit something that I had long since forgotten.

I am not going to defend you as a mathematician or computer scientist, as that is not the point of this blog. Had you posted this in a scientific journal, then it may well be appropriate for others to take such vocal umbrage at your lack of academic rigour.

Having inherited a few systems from mathematicians and academics, and suffered the horrors of making them work in the real world, that is not a perspective that I seek unless I have to.

This is edutainment, has never pretended to be anything else, and there is nothing wrong with that.

Like I said, cool cartoon. Works for a code monkey like me.

Can someone solve this?

a=b
so, a^2 = ab
so, a^2 - b^2 = (a*b) - b^2
factor: (a+b)(a-b) = b*(a-b)
cancel: (a+b) = b
so, 2b = b
so, 2 = 1

No Googling!

Division by zero: since a = b, (a-b) is zero, so the “cancel” step divides both sides by zero.

Umm, you definitely CAN establish that a problem is NP-hard or NP-complete. It’s called a proof. If you can prove that a problem is NP-hard or NP-complete, you can thus prove to yourself (or others) that an exact polynomial-time solution is intractable (assuming P ≠ NP). So you can stop trying to find one, and focus on
a.) redefining the problem
b.) using approximation algorithms

It’s not about rigor. It’s completely wrong. You are dumber as a result of having read it.

Is it just me, or has this blog turned into ads for books lately?

Travelling Salesperson is one of my favorite examples of why you should spend some time refining the requirements of your problem when you run into something that might be NP-Complete. TSP is easy to approximate within a constant factor whenever distances obey the triangle inequality, and when all of the points lie in a plane (a pretty common case on the planet Earth), Euclidean TSP even admits a polynomial-time approximation scheme — though the exact problem is still NP-hard.
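To make the constant-factor claim concrete, here’s a rough sketch (my own, with a deliberately naive O(n^3) Prim’s) of the classic tree-doubling 2-approximation for metric TSP:

```python
import math

def mst_tsp_tour(pts):
    """Tree-doubling 2-approximation for metric TSP: build a minimum
    spanning tree (naive Prim's), then shortcut a preorder walk of it."""
    n = len(pts)
    in_tree = {0}
    adj = {i: [] for i in range(n)}
    while len(in_tree) < n:
        # Cheapest edge leaving the current tree.
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: math.dist(pts[e[0]], pts[e[1]]))
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)
    # Preorder walk of the tree; skipping already-seen vertices is the
    # "shortcut" step, which cannot lengthen the tour when the triangle
    # inequality holds.
    tour, seen, stack = [], set(), [0]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            tour.append(u)
            stack.extend(reversed(adj[u]))
    return tour
```

The tour costs at most twice the MST weight, and the MST weight is a lower bound on the optimal tour, hence the factor of 2.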

Jeff Atwood: I guess I make a distinction between reducing a problem to another known NP-complete problem and proving a problem is NP-complete. Apparently we can’t prove anything is NP-complete, we can only throw a wall of experts at it and see them all fail.

This is incorrect - the Cook-Levin theorem (http://en.wikipedia.org/wiki/Cook%27s_theorem) proves that the Boolean satisfiability problem (and, by an easy reduction, 3-SAT) is NP-Complete. My understanding of the history is that Cook published in 1971; Levin’s work was done around the same time, but not published until a bit later, and it didn’t become so well known in the West, as Levin is Russian. For that reason, the result is often known as Cook’s theorem.

Since that time, thousands of problems have been proved NP-complete, usually using a problem known to be NP-Complete, like 3-SAT, and a reduction. Garey and Johnson contains several hundred such problems, and many more are known today.

What is not yet known is how to prove that an NP-Complete problem cannot be solved in polynomial time. If that could be proved, it would settle the famous P vs NP problem.

(That aside, thanks for your blog, which I greatly enjoy.)

I very seldom see much acknowledgement that there are lots of complexity classes far far higher than NP (and thus problems much much harder than NP-complete ones), and that in practice the limits of worst-case tractability are substantially lower than NP: algorithms that run slower than O(n^2) are seldom tolerable in practice.

In many cases there’s no such thing as an approximate solution: you can approximate a best path, but figuring out whether or not there is any path satisfying some set of conditions is all or nothing. In those cases, the best approach is to set aside the worst case and focus on realistic cases. There are plenty of very hard problems (NExpTime and higher) which can be solved in time close to O(n log n) on the kinds of data that tend to be encountered in practice.

If I were a manager and a programmer came to me saying “this problem is in such a high complexity class it would be impossible to solve!”, I’d encourage the programmer to identify patterns in the type of data we expect that could be exploited for optimization. A formal definition of this tractable fragment of the original problem may sometimes be difficult to codify, but that’s beside the point if you can solve the problems you encounter. It’s easy to phrase any easy problem as a subset of a much harder problem; the trick is to identify the truly necessary bits of the problem and see how hard that part is on its own.

The criticism I’d level at high-complexity research is that academics can dwell on the really hard problems (because they’re interesting) and skip quickly over simplifications (because they’re trivial and not worthy of study).

Having scanned over the other comments, it seems that a lot of people are complaining about the article again. Well, I don’t. It might not be the most accurate description but it brings the point across (and I just like the analogy). Plus, some of the comments are sooo much worse. Consider the very first one: “… because … nearly any other NP problem can be reduced to [3-SAT].” This reveals a complete misunderstanding of NP-completeness and the concept of reduction.

With that out of the way, I want to answer your question. I’m undecided between three very different cheats. The first uses cutting planes to literally “cut away” parts of the (mathematical formulation of the) problem to make it easier. I like it for its practical importance. Take TSP. Nowadays, this problem has very good solvers that use this approach (often combined with branch-and-bound to yield branch-and-cut), combined with raw computation power, to tackle even extremely large problem instances. For instance, Concorde has found the exact solution for the 24,978 cities of Sweden.

The second approach, Lagrangean relaxation, just has to have a place in the list because of its elegant mathematical derivation. By pricing a hard constraint into the objective with a Lagrangean multiplier (which can be found easily, e.g. by subgradient search), you get a much easier problem whose solution bounds the original, and the overall runtime can be drastically decreased.
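A toy illustration of the idea (my own example, using knapsack rather than TSP): relax the capacity constraint into the objective, and for any nonnegative multiplier the relaxed problem decomposes per item and upper-bounds the true optimum.

```python
from itertools import product

def lagrangian_bound(values, weights, cap, lam):
    """0/1 knapsack with the capacity constraint priced into the objective.
    For any lam >= 0:
      max sum(v_i*x_i) - lam*(sum(w_i*x_i) - cap)
        = lam*cap + sum_i max(0, v_i - lam*w_i)
    which is trivial to evaluate and upper-bounds the constrained optimum."""
    return lam * cap + sum(max(0.0, v - lam * w) for v, w in zip(values, weights))

def knapsack_exact(values, weights, cap):
    """Exponential brute force, for comparison only."""
    return max(sum(v * x for v, x in zip(values, xs))
               for xs in product([0, 1], repeat=len(values))
               if sum(w * x for w, x in zip(weights, xs)) <= cap)

values, weights, cap = [6, 10, 12], [1, 2, 3], 5
opt = knapsack_exact(values, weights, cap)  # exact answer for this tiny instance

# Every multiplier gives a valid upper bound; a good multiplier gives a tight one.
for lam in [0.0, 1.0, 2.0, 4.0, 6.0]:
    assert lagrangian_bound(values, weights, cap, lam) >= opt
```

In a real solver you’d search for the multiplier that minimizes the bound and use it to prune a branch-and-bound tree; this sketch only shows why the bound is valid.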

The last cheat is ant colony optimization. The underlying analogy is just fascinating. Also, implementing this solution is very easy and just plain fun. With the right sets of parameters, this heuristic often leads to very good results. Unfortunately, there’s no guarantee for good results which is simply unacceptable for many problems.
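Since it really is easy to implement, here’s a bare-bones ACO for TSP (my own uncalibrated sketch; the parameter names follow the usual alpha/beta/evaporation convention):

```python
import math
import random

random.seed(1)

def aco_tsp(pts, ants=20, iters=50, evap=0.5, alpha=1.0, beta=2.0):
    """Minimal ant colony optimization for TSP: ants build tours biased by
    pheromone (weighted by alpha) and inverse distance (weighted by beta);
    shorter tours deposit more pheromone for the next generation."""
    n = len(pts)
    dist = [[math.dist(pts[i], pts[j]) or 1e-9 for j in range(n)] for i in range(n)]
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                u = tour[-1]
                choices = [v for v in range(n) if v not in tour]
                weights = [pher[u][v] ** alpha * (1.0 / dist[u][v]) ** beta
                           for v in choices]
                tour.append(random.choices(choices, weights)[0])
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # Evaporate old pheromone, then deposit proportionally to tour quality.
        pher = [[p * (1 - evap) for p in row] for row in pher]
        for length, tour in tours:
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pher[a][b] += 1.0 / length
                pher[b][a] += 1.0 / length
    return best_tour, best_len
```

As said above: fun, often good, never guaranteed.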