Original problem and naive algorithm
Given a set of relations such as
a < c
b < c
b < d < e
what is the most efficient algorithm for finding a set of integers starting from 0 (and with as many duplicate integers as possible!) that satisfies the relations, i.e. in this case
a = 0; b = 0; c = 1; d = 1; e = 2
The naive algorithm is to iterate over the set of relations repeatedly, increasing values as needed, until convergence is reached, as implemented below in Python:
relations = [('a', 'c'), ('b', 'c'), ('b', 'd', 'e')]
print(relations)

values = dict.fromkeys(set(sum(relations, ())), 0)
print(values)

converged = False
while not converged:
    converged = True
    for relation in relations:
        for i in range(1, len(relation)):
            if values[relation[i]] <= values[relation[i-1]]:
                converged = False
                values[relation[i]] += values[relation[i-1]] - values[relation[i]] + 1
print(values)
Apart from the O(Relations²) complexity (if I remember correctly), the algorithm also runs into an infinite loop if an invalid relation is given (e.g. adding e < d). Detecting such failure cases is not strictly necessary for my use case, but it would be a nice bonus.
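One cheap way to get that bonus without changing the algorithm is to cap the number of sweeps: in an acyclic instance every value equals the longest-chain depth of its symbol, so no value can exceed the number of symbols and convergence has to happen within len(values) + 1 sweeps. A minimal sketch of this guard (my own addition, not part of the original code):

relations = [('a', 'c'), ('b', 'c'), ('b', 'd', 'e')]
values = dict.fromkeys(set(sum(relations, ())), 0)

for _ in range(len(values) + 1):
    converged = True
    for relation in relations:
        for i in range(1, len(relation)):
            if values[relation[i]] <= values[relation[i-1]]:
                converged = False
                values[relation[i]] = values[relation[i-1]] + 1
    if converged:
        break
else:
    # the sweep cap was exhausted without converging, so the input must be cyclic
    raise ValueError("relations contain a cycle")

print(values)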
Python implementation based on Tim Peters' comment
relations = [('a', 'c'), ('b', 'c'), ('b', 'd'), ('b', 'e'), ('d', 'e')]

symbols = set(sum(relations, ()))
numIncoming = dict.fromkeys(symbols, 0)
values = {}

for rel in relations:
    numIncoming[rel[1]] += 1

k = 0
n = len(symbols)
c = 0
while k < n:
    curs = [sym for sym in symbols if numIncoming[sym] == 0]
    curr = [rel for rel in relations if rel[0] in curs]

    for sym in curs:
        symbols.remove(sym)
        values[sym] = c

    for rel in curr:
        relations.remove(rel)
        numIncoming[rel[1]] -= 1

    c += 1
    k += len(curs)

print(values)
For now it requires the relations to be "split" (b < d and d < e rather than b < d < e), but cycle detection is easy (whenever curs is empty while k < n), and it should be possible to implement it more efficiently (in particular, how curs and curr are determined); the sketch below covers both the splitting and the cycle check.
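A sketch (mine, not part of the original answer) that folds in both points: chained relations are split into pairs up front, and an exhausted front while symbols remain is reported as a cycle instead of looping forever. It also precomputes successor lists so the front can be advanced without rescanning all symbols and relations:

def chunked_topsort_py(relations):
    # Split chained relations such as ('b', 'd', 'e') into pairs ('b', 'd'), ('d', 'e').
    pairs = set()
    for rel in relations:
        pairs.update(zip(rel, rel[1:]))

    # Successor lists and incoming-edge counts.
    succs = {}
    num_incoming = {}
    for a, b in pairs:
        succs.setdefault(a, []).append(b)
        succs.setdefault(b, [])
        num_incoming[b] = num_incoming.get(b, 0) + 1
        num_incoming.setdefault(a, 0)

    values = {}
    chunk = 0
    front = [sym for sym, deg in num_incoming.items() if deg == 0]
    remaining = len(num_incoming)
    while front:
        nxt = []
        for sym in front:
            values[sym] = chunk
            remaining -= 1
            for succ in succs[sym]:
                num_incoming[succ] -= 1
                if num_incoming[succ] == 0:
                    nxt.append(succ)
        front = nxt
        chunk += 1
    if remaining:
        # some symbols never ran out of predecessors: the relations contain a cycle
        raise ValueError("relations contain a cycle")
    return values

print(chunked_topsort_py([('a', 'c'), ('b', 'c'), ('b', 'd', 'e')]))

On the introductory example this produces the same mapping as above: a = 0, b = 0, c = 1, d = 1, e = 2.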
Worst-case timing (1000 elements, 999 relations, reverse order; Version A is the naive algorithm, Version B the Tim Peters-based implementation):
Version A: 0.944926519991
Version B: 0.115537379751
Best-case timing (1000 elements, 999 relations, forward order):
Version A: 0.00497004507556
Version B: 0.102511841589
Average-case timing (1000 elements, 999 relations, random order):
Version A: 0.487685376214
Version B: 0.109792166323
The test data can be generated with

from random import shuffle

n = 1000
relations_worst = list((a, b) for a, b in zip(range(n)[::-1][1:], range(n)[::-1]))
relations_best = list(relations_worst[::-1])
relations_avg = list(relations_worst)
shuffle(relations_avg)   # shuffle() works in place and returns None
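The timing harness itself is not shown here; a possible minimal version (my sketch; version_a and version_b are hypothetical wrappers around the two implementations above):

from copy import deepcopy
from timeit import default_timer as timer

def time_once(func, relations):
    data = deepcopy(relations)   # work on a copy so in-place variants don't affect later runs
    start = timer()
    func(data)
    return timer() - start

# e.g. print(time_once(version_a, relations_worst), time_once(version_b, relations_worst))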
C++ implementation based on Tim Peters' answer (simplified to symbols in [0, n))
vector<unsigned> chunked_topsort(const vector<vector<unsigned>>& relations, unsigned n)
{
    vector<unsigned> ret(n);
    vector<set<unsigned>> succs(n);   // successor sets, deduplicated
    vector<unsigned> npreds(n);       // number of distinct predecessors per symbol
    set<unsigned> allelts;
    set<unsigned> nopreds;            // current front: symbols with no predecessors left

    for(auto i = n; i--;)
        allelts.insert(i);

    // Build the graph; chained relations (a < b < c) are walked pairwise.
    for(const auto& r : relations)
    {
        auto u = r[0];
        if(npreds[u] == 0) nopreds.insert(u);

        for(size_t i = 1; i < r.size(); ++i)
        {
            auto v = r[i];
            if(npreds[v] == 0) nopreds.insert(v);

            if(succs[u].count(v) == 0)
            {
                succs[u].insert(v);
                npreds[v] += 1;
                nopreds.erase(v);
            }
            u = v;
        }
    }

    // Peel off one predecessor-free layer ("chunk") per iteration.
    set<unsigned> next;
    unsigned chunk = 0;
    while(!nopreds.empty())
    {
        next.clear();
        for(const auto& u : nopreds)
        {
            ret[u] = chunk;
            allelts.erase(u);
            for(const auto& v : succs[u])
            {
                npreds[v] -= 1;
                if(npreds[v] == 0)
                    next.insert(v);
            }
        }
        swap(nopreds, next);
        ++chunk;
    }

    assert(allelts.empty());   // fails if the relations contain a cycle
    return ret;
}
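A possible driver for chunked_topsort (my own sketch, not part of the answer), mapping the letters of the introductory example onto indices a=0, b=1, c=2, d=3, e=4; it assumes the same headers and using namespace std that the function body above already relies on:

#include <cassert>
#include <iostream>
#include <set>
#include <vector>
using namespace std;

// chunked_topsort as defined above

int main()
{
    const vector<vector<unsigned>> relations = {
        {0, 2},      // a < c
        {1, 2},      // b < c
        {1, 3, 4}    // b < d < e
    };
    const auto values = chunked_topsort(relations, 5);
    for (unsigned i = 0; i < values.size(); ++i)
        cout << char('a' + i) << " = " << values[i] << '\n';   // a=0 b=0 c=1 d=1 e=2
    return 0;
}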
C++ implementation with improved cache locality
vector<unsigned> chunked_topsort2(const vector<vector<unsigned>>& relations, unsigned n)
{
    vector<unsigned> ret(n);
    vector<unsigned> npreds(n);

    // Flatten chained relations into (u, v) pairs and count incoming edges.
    vector<tuple<unsigned, unsigned>> flat_relations;
    flat_relations.reserve(relations.size());
    vector<unsigned> relation_offsets(n+1);

    for(const auto& r : relations)
    {
        if(r.size() < 2) continue;
        for(size_t i = 0; i < r.size()-1; ++i)
        {
            assert(r[i] < n && r[i+1] < n);
            flat_relations.emplace_back(r[i], r[i+1]);
            relation_offsets[r[i]+1] += 1;
            npreds[r[i+1]] += 1;
        }
    }

    // CSR-style layout: offsets index the sorted pair list per source symbol.
    partial_sum(relation_offsets.begin(), relation_offsets.end(), relation_offsets.begin());
    sort(flat_relations.begin(), flat_relations.end());

    vector<unsigned> nopreds;
    for(unsigned i = 0; i < n; ++i)
        if(npreds[i] == 0)
            nopreds.push_back(i);

    // Peel off one predecessor-free layer ("chunk") per iteration.
    vector<unsigned> next;
    unsigned chunk = 0;
    while(!nopreds.empty())
    {
        next.clear();
        for(const auto& u : nopreds)
        {
            ret[u] = chunk;
            for(unsigned i = relation_offsets[u]; i < relation_offsets[u+1]; ++i)
            {
                auto v = std::get<1>(flat_relations[i]);
                npreds[v] -= 1;
                if(npreds[v] == 0)
                    next.push_back(v);
            }
        }
        swap(nopreds, next);
        ++chunk;
    }

    // All counts must have reached zero; otherwise the relations contain a cycle.
    assert(all_of(npreds.begin(), npreds.end(), [](unsigned i) { return i == 0; }));
    return ret;
}
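For reference, a sketch (mine) of how benchmark inputs matching the Python generator above could be built for the timings quoted below: n - 1 pairwise relations i < i + 1, in forward (best), reverse (worst) or shuffled (average) order:

#include <algorithm>
#include <random>
#include <string>
#include <vector>
using namespace std;

vector<vector<unsigned>> make_relations(unsigned n, const string& order)
{
    vector<vector<unsigned>> rels;
    if (n < 2) return rels;
    rels.reserve(n - 1);
    for (unsigned i = 0; i + 1 < n; ++i)
        rels.push_back({i, i + 1});                     // forward order = best case
    if (order == "worst")
        reverse(rels.begin(), rels.end());              // reverse order = worst case
    else if (order == "avg")
        shuffle(rels.begin(), rels.end(), mt19937{42}); // fixed seed, purely illustrative
    return rels;
}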
C++ timings: 10000 elements, 9999 relations, averaged over 1000 runs
Worst case:
chunked_topsort: 4.21345 ms
chunked_topsort2: 1.75062 ms
Best case:
chunked_topsort: 4.27287 ms
chunked_topsort2: 0.541771 ms
Average case:
chunked_topsort: 6.44712 ms
chunked_topsort2: 0.955116 ms
Unlike the Python version, the C++ chunked_topsort depends heavily on the order of the elements. Interestingly, the random-order / average case is by far the slowest (with the set-based chunked_topsort).