Actually, I already have a partial answer to this question, but I would like to know whether this little piece of greedy code can be generalized into something closer to an optimal solution.

How I ran into the problem (not relevant to the question itself, but possibly interesting):

I receive a large batch of objects (outlines of a set of dikes, each dike keeping more or less the same shape along its length) that I can group by an attribute (the dike's name). The output of my program goes to an external program that we have to call by hand (don't ask me why) and that cannot recover from failure (one error stops the whole batch).

In the application where I use this there are no hard requirements on the number of bins or on the maximum bin size; what I try to do is

  • keep the number of groups low (fewer calls to the external program),
  • keep the sets small (less damage if a batch fails),
  • keep similar things together (a failure in a group is probably a failure of the whole group).

I didn't have much time, so I wrote a small greedy function that groups the sets together.

A colleague suggested that I could add some noise to the data to explore the neighbourhood of the approximate solution I found, and we wondered how far the solutions we find are from the optimum (sketches of both ideas follow the code below).

Not that it matters for the original task, which doesn't need a true optimal solution, but I thought I would share the question with the community and see what comments come out of it.

def group_to_similar_sizes(orig, max_size=None, max_factor=None):
    """group orig list in sections that to not overflow max(orig) (or given max_size).

    return list of grouped indices, plus max effective length.

    >>> group_to_similar_sizes([1, 3, 7, 13])
    ([[2, 1, 0], [3]], 13)
    >>> group_to_similar_sizes([2, 9, 9, 11, 12, 19, 19, 22, 22, ])
    ([[3, 1], [4, 2], [5], [6, 0], [7], [8]], 22)

    result is independent of original ordering
    >>> group_to_similar_sizes([9, 19, 22, 12, 19, 9, 2, 22, 11, ])
    ([[3, 1], [4, 2], [5], [6, 0], [7], [8]], 22)

    you can specify a desired max size
    >>> group_to_similar_sizes([2, 9, 9, 11, 12, 19, 19, 22, 22, ], 50)
    ([[3, 2, 1], [6, 5, 4], [8, 7, 0]], 50)

    if the desired max size is too small, it still influences the way we make groups.
    >>> group_to_similar_sizes([1, 3, 7, 13], 8)
    ([[1], [2, 0], [3]], 13)
    >>> group_to_similar_sizes([2, 9, 9, 11, 12, 19, 19, 22, 22, ], 20)
    ([[1], [3, 2], [4, 0], [5], [6], [7], [8]], 22)

    max size can be adjusted by a multiplication factor
    >>> group_to_similar_sizes([9, 19, 22, 12, 5, 9, 2, 22, 11, ], max_factor=0.75)
    ([[2], [3], [4, 1], [5, 0], [6], [7], [8]], 22)
    >>> group_to_similar_sizes([9, 19, 22, 12, 5, 9, 2, 22, 11, ], max_factor=1.5)
    ([[2, 1], [6, 5], [7, 3, 0], [8, 4]], 33)
    """

    ordered = sorted(orig)
    max_size = max_size or ordered[-1]
    if max_factor is not None:
        max_size = int(max_size * max_factor)

    orig_ordered = list(ordered)
    todo = set(range(len(orig)))
    effective_max = 0

    result = []
    ## while we still have unassigned items
    while ordered:
        ## choose the largest item
        ## make it member of a group
        ## check which we can still put in its bin

        candidate_i = len(ordered) - 1
        candidate = ordered.pop()
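        ## skip items already packed into an earlier group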
        if candidate_i not in todo:
            continue
        todo.remove(candidate_i)

        group = [candidate_i]
        group_size = candidate

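        ## greedily add the largest remaining items that still fit in this group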
        for j in sorted(todo, reverse=True):
            if orig_ordered[j] + group_size <= max_size:
                group.append(j)
                group_size += orig_ordered[j]
                todo.remove(j)

        result.insert(0, group)
        effective_max = max(group_size, effective_max)

    return result, effective_max
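
A minimal sketch of the colleague's noise idea, for reference. The wrapper name, the noise model, and the scoring rule (fewer groups first, then smaller effective maximum) are assumptions of mine, not part of the original code:

import random

def explore_with_noise(orig, trials=200, noise=0.1, max_size=None):
    """re-run the greedy on jittered copies of the sizes and keep the
    best grouping, scored against the real sizes (sketch only)."""
    def run(values):
        groups, _ = group_to_similar_sizes(values, max_size)
        ## the greedy returns indices into sorted(values); map them
        ## back to positions in the original list
        order = sorted(range(len(values)), key=values.__getitem__)
        mapped = [[order[j] for j in g] for g in groups]
        eff_max = max(sum(orig[i] for i in g) for g in mapped)
        return (len(mapped), eff_max), mapped, eff_max

    best_score, best_groups, best_max = run(list(orig))
    for _ in range(trials):
        jittered = [v * (1.0 + random.uniform(-noise, noise)) for v in orig]
        score, groups, eff_max = run(jittered)
        if score < best_score:
            best_score, best_groups, best_max = score, groups, eff_max
    return best_groups, best_max

Note that, unlike the function above, this wrapper returns indices into the original (unsorted) list.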
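
And since we wondered how far the greedy is from the optimum: if the objective is pinned down as "fewest groups without exceeding max_size" (the real objective above is fuzzier than that), an exhaustive search gives a baseline for small inputs. This is a sketch under that assumed objective; like the greedy, it lets an oversized item sit alone in its own group:

def optimal_group_count(sizes, max_size):
    """exhaustive baseline, exponential: only usable for a dozen or so
    items.  return the minimum number of groups such that no group
    exceeds max_size."""
    ordered = sorted(sizes, reverse=True)

    def place(i, bins):
        if i == len(ordered):
            return len(bins)
        best = len(ordered)  ## upper bound: every item alone
        seen = set()
        for b in range(len(bins)):
            ## try each distinct existing load once (symmetry pruning)
            if bins[b] + ordered[i] <= max_size and bins[b] not in seen:
                seen.add(bins[b])
                bins[b] += ordered[i]
                best = min(best, place(i + 1, bins))
                bins[b] -= ordered[i]
        bins.append(ordered[i])  ## or open a new group
        best = min(best, place(i + 1, bins))
        bins.pop()
        return best

    return place(0, [])

For the doctest input [2, 9, 9, 11, 12, 19, 19, 22, 22] with max_size=50 the sizes sum to 125, so at least 3 groups are needed, and the greedy above does find 3, which is optimal in that case.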

1 Answer

I like your colleague's idea of adding some noise to the data, but maybe it's better to make a few swaps in ordered after you call ordered = sorted(orig)?
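
For example (a sketch; the helper name and the swaps parameter are made up, and group_to_similar_sizes would need a variant that accepts this pre-shuffled list instead of sorting internally):

import random

def nearly_sorted(values, swaps=2):
    ## sort, then apply a few random adjacent swaps, so the greedy pass
    ## sees a slightly perturbed order while the sizes themselves stay
    ## exact (unlike additive noise)
    ordered = sorted(values)
    if len(ordered) < 2:
        return ordered
    for _ in range(swaps):
        i = random.randrange(len(ordered) - 1)
        ordered[i], ordered[i + 1] = ordered[i + 1], ordered[i]
    return ordered

The advantage over noisy data is that every candidate grouping is evaluated with the true sizes, so the effective maximum you compute is always the real one.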

answered 2011-05-18T13:48:26.760