
I was reading about the subset-sum problem when I came up with what appears to be a general-purpose algorithm for solving it:

(defun subset-contains-sum (set sum)
    (let ((subsets) (new-subset) (new-sum))
        (dolist (element set)
            (dolist (subset-sum subsets)
                ;; Extend every subset seen so far with the new element.
                (setf new-subset (cons element (car subset-sum)))
                (setf new-sum (+ element (cdr subset-sum)))
                (if (= new-sum sum)
                    (return-from subset-contains-sum new-subset))
                (setf subsets (cons (cons new-subset new-sum) subsets)))
            ;; Record the singleton subset as a (subset . sum) pair; the
            ;; car must be a list so the cons above builds proper lists.
            (if (= element sum)
                (return-from subset-contains-sum (list element)))
            (setf subsets (cons (cons (list element) element) subsets)))))

"set" is a list containing no duplicates, and "sum" is the sum to search for a subset of. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. A new subset can be created from an old one in O(1) time by simply consing the element onto the front.

I am not sure what its runtime complexity is, but it appears that with each element added, the size of "subsets" doubles, plus one, so it looks to me to be at least quadratic.

I am posting this because my impression before was that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that, assuming you have the CPU cycles, will always give you the correct answer. How many other NP-complete problems can be solved like this one?


5 Answers


NP-complete problems are solvable, just not (as far as we know) in polynomial time. That is, an NP-complete problem may have an O(n*2^n) algorithm that solves it, but it won't have, for example, an O(n^3) algorithm.

Interestingly, if a fast (polynomial) algorithm were found for any NP-complete problem, then every problem in NP could be solved in polynomial time. This is what P = NP is about.

If I understand your algorithm correctly (and this is based more on your comments than on the code), then it is equivalent to the O(n*2^n) algorithm here. There are 2^n subsets, and since you also need to sum each subset, the algorithm is O(n*2^n).

One more thing about complexity - the O(whatever) only indicates how well a particular algorithm scales. You cannot compare two algorithms and say that one is faster than the other based on it. Big-O notation doesn't care about implementation details and optimisations - it is possible to write two implementations of the same algorithm, one much faster than the other, even though both are O(n^2). One woman making babies is an O(n) operation, but chances are it will take a lot longer than most O(n*log(n)) sorts you perform. All you can say based on big-O is that sorting will be slower for very large values of n.

Answered 2010-03-01T02:11:10.923

All of the NP-complete problems have solutions - as long as you're willing to spend the time to compute them, that is. Just because there's no known efficient algorithm doesn't mean there isn't an algorithm at all. For example, you could iterate over every potential solution, and you'll eventually find one. These problems are used all over the place in real-world computing. You just need to be careful about how big a problem you set for yourself if you're going to need exponential time (or worse!) to solve it.
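As a concrete illustration of "iterate over every potential solution" (my own sketch, not the poster's algorithm; assumes a nonzero target sum): each integer from 0 to 2^n - 1 can serve as a bitmask encoding one subset, so exhaustive search is a single loop over all masks.

```lisp
;; Brute-force subset-sum: try every one of the 2^n subsets.
;; Each integer mask from 0 to 2^n - 1 encodes one subset, where
;; bit i set means the i-th element of SET is included.
(defun subset-sum-brute-force (set sum)
  (let ((n (length set)))
    (dotimes (mask (expt 2 n))
      (let ((subset (loop for i from 0 below n
                          when (logbitp i mask)
                            collect (nth i set))))
        (when (= (reduce #'+ subset :initial-value 0) sum)
          (return-from subset-sum-brute-force subset))))))
```

For example, (subset-sum-brute-force '(3 7 1 8) 9) returns (1 8). The loop always terminates with a correct answer, but the number of masks doubles with every element added.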

Answered 2010-03-01T02:13:11.883

I am not sure what its runtime complexity is, but it appears that with each element added, the size of "subsets" doubles, plus one, so it looks to me to be at least quadratic.

If the run-time doubles for each increase in N, you're looking at an O(2^N) algorithm. That's also what I'd expect from visiting all subsets of a set (or all members of the power set of a set), as that's exactly 2^N members (if you include the empty set).

The fact that adding or not adding an element to all hitherto-seen sets is fast doesn't mean that the total processing is fast.
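The poster's observed "doubles, plus one" growth is exactly this 2^N count. A quick sketch of the recurrence (a hypothetical helper of my own, not from the thread): after k elements, the "subsets" list holds every nonempty subset, of which there are 2^k - 1.

```lisp
;; Size of the "subsets" list after K elements: each new element
;; pairs with every existing subset (doubling) and also forms a
;; singleton (plus one), i.e. s(k) = 2*s(k-1) + 1 = 2^k - 1.
(defun subsets-count (k)
  (if (zerop k)
      0
      (+ (* 2 (subsets-count (1- k))) 1)))
```

For instance, (subsets-count 4) is 15, matching 2^4 - 1 nonempty subsets.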

Answered 2010-03-01T08:35:54.177

What is going on here could be expressed much more simply using recursion:

(defun subset-sum (set sum &optional subset)
  (when set
    (destructuring-bind (head . tail) set
      (or (and (= head sum) (cons head subset))
          (subset-sum tail sum          subset)
          (subset-sum tail (- sum head) (cons head subset))))))

The two recursive calls at the end clearly show we are traversing a binary tree of depth n, the size of the given set. The number of nodes in the binary tree is O(2^n), as expected.

Answered 2010-03-01T09:11:22.027

It can be solved in pseudo-polynomial time: Karp-reduce it to the decision problem, which standard dynamic programming answers in O(nM) time, where n is the number of elements and M is the target sum. Note that this is not polynomial in the strict sense, since M can be exponential in the number of bits used to encode the input.
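For reference, a sketch of the standard pseudo-polynomial dynamic program behind the O(nM) bound (my own illustration, assuming positive integers and a positive target sum M):

```lisp
;; Pseudo-polynomial DP for the subset-sum decision problem.
;; REACHABLE[s] is true iff some subset of the elements seen so far
;; sums to s. Runs in O(n*M) time and O(M) space.
(defun subset-sum-p (set sum)
  (let ((reachable (make-array (1+ sum) :initial-element nil)))
    (setf (aref reachable 0) t)     ; the empty subset sums to 0
    (dolist (element set)
      ;; Iterate downward so each element is used at most once.
      (loop for s from sum downto element
            when (aref reachable (- s element))
              do (setf (aref reachable s) t)))
    (aref reachable sum)))
```

For example, (subset-sum-p '(3 7 1 8) 9) is true (1 + 8 = 9), while (subset-sum-p '(2 4 6) 5) is false. The table has M + 1 entries, which is why the running time depends on the magnitude of M, not just on n.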

Answered 2010-03-22T02:25:09.803