(a,b), (c,d), (e,f), (g,h) => (b, c), (d, e), (f, g)
So we drop the first and the last element and handle the reverse segments.
This is the third time I've typed this, and every time I end up writing my own code...
There is nothing like this in collections or itertools. You could chain.from_iterable the pairs, drop the first and last element with islice, and then use the grouper recipe to turn them back into pairs, but you would still need to define the grouper function yourself.
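For illustration, a minimal sketch of that itertools-based approach might look like the following (grouper is copied from the itertools recipes, shift_pairs is just a placeholder name, and izip_longest is the Python 2 spelling of zip_longest):

import itertools

def shift_pairs(pairs):  # hypothetical name, for illustration only
    def grouper(iterable, n, fillvalue=None):
        # the grouper recipe from the itertools documentation
        args = [iter(iterable)] * n
        return itertools.izip_longest(fillvalue=fillvalue, *args)
    flat = itertools.chain.from_iterable(pairs)              # a, b, c, d, e, f, g, h
    inner = itertools.islice(flat, 1, 2 * len(pairs) - 1)    # drop first and last; needs a sequence with len()
    return list(grouper(inner, 2))

print(shift_pairs([('a', 'b'), ('c', 'd'), ('e', 'f'), ('g', 'h')]))
# [('b', 'c'), ('d', 'e'), ('f', 'g')]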
I would define my own optimized utility function for this:
def shift(pairs):
    # Pair the second element of each tuple with the first element of the next.
    it = iter(pairs)
    (_, p) = next(it)
    for (n, p2) in it:
        yield (p, n)
        p = p2
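For example:

>>> pairs = [('a', 'b'), ('c', 'd'), ('e', 'f'), ('g', 'h')]
>>> list(shift(pairs))
[('b', 'c'), ('d', 'e'), ('f', 'g')]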
You can do this:
>>> tups=[('a','b'), ('c','d'), ('e','f'), ('g','h')]
>>> zip(*[iter([e for t in tups for e in t][1:-1])]*2)
[('b', 'c'), ('d', 'e'), ('f', 'g')]
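To unpack the one-liner: the comprehension flattens the tuples into ['a', 'b', ..., 'h'], the [1:-1] slice drops the first and last element, and zip(*[iter(...)]*2) pairs up consecutive items because both zip arguments pull from the same iterator. The output above is from Python 2; on Python 3, zip returns an iterator, so the same idea needs an explicit list():

>>> tups = [('a','b'), ('c','d'), ('e','f'), ('g','h')]
>>> list(zip(*[iter([e for t in tups for e in t][1:-1])]*2))
[('b', 'c'), ('d', 'e'), ('f', 'g')]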
The code I posted is most assuredly NOT 'an order of magnitude slower' than using itertools or KennyTM's solution:
from __future__ import print_function
import itertools
from collections import defaultdict
li=[('a','b'), ('c','d'), ('e','f'), ('g','h')]
def t1(tups):
    # the zip/iter one-liner
    return zip(*[iter([e for t in tups for e in t][1:-1])]*2)

def t2(tups):
    # chain.from_iterable + islice + the grouper recipe
    def grouper(iterable, n, fillvalue=None):
        "Collect data into fixed-length chunks or blocks"
        # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
        args = [iter(iterable)] * n
        return itertools.izip_longest(fillvalue=fillvalue, *args)
    flat = itertools.chain.from_iterable(tups)
    sl = itertools.islice(flat, 1, len(tups)*2-1)
    return list(grouper(sl, 2))

def t3(pairs):
    # the shift generator
    it = iter(pairs)
    (_, p) = next(it)
    for (n, p2) in it:
        yield (p, n)
        p = p2

if __name__ == '__main__':
    import timeit
    n = 1000000
    print(t1(li), t2(li), list(t3(li)))
    assert t1(li) == t2(li) == list(t3(li))
    print(timeit.timeit("t1(li)", setup="from __main__ import t1, li", number=n))
    print(timeit.timeit("t2(li)", setup="from __main__ import t2, li, itertools", number=n))
    print(timeit.timeit("list(t3(li))", setup="from __main__ import t3, li", number=n))
Prints:
3.02761888504
4.29241204262
2.12067294121
Which shows they are all roughly the same speed. KennyTM's is the fastest, but not by an order of magnitude.