You cannot retrieve that many values with a single MGET; the command is not designed to sustain such a workload. Generating very large Redis commands is a bad idea:
if you need to retrieve a large amount of data, you should pipeline several GET or MGET commands. For instance, the following code can retrieve an arbitrary number of items while minimizing both the number of round trips and the server-side CPU consumption:
import redis

N_PIPE = 50  # number of MGET commands per pipeline execution
N_MGET = 20  # number of keys per MGET command

# Return a dictionary mapping each key in the input array to its value
def massive_get(r, array):
    res = {}
    pipe = r.pipeline(transaction=False)
    i = 0
    while i < len(array):
        keys = []
        # Queue up to N_PIPE MGET commands, each covering N_MGET keys
        for n in range(0, N_PIPE):
            k = array[i:i + N_MGET]
            keys.append(k)
            pipe.mget(k)
            i += N_MGET
            if i >= len(array):
                break
        # Execute the pipeline and merge each chunk's results
        for k, v in zip(keys, pipe.execute()):
            res.update(dict(zip(k, v)))
    return res

# Example: retrieve all keys from 0 to 1022:
pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
r = redis.Redis(connection_pool=pool)
array = list(range(0, 1023))
print(massive_get(r, array))
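The core of the function above is simply walking the key list in fixed-size slices, one slice per MGET. That slicing logic can be checked in isolation, without a Redis server. A minimal sketch (`chunk_keys` is a hypothetical helper introduced here for illustration, not part of the code above):

```python
# Hypothetical helper mirroring the slicing pattern used by massive_get:
# step through the key list in fixed increments and yield each MGET-sized chunk.
def chunk_keys(array, n_mget):
    i = 0
    while i < len(array):
        yield array[i:i + n_mget]
        i += n_mget

chunks = list(chunk_keys(list(range(7)), 3))
print(chunks)  # [[0, 1, 2], [3, 4, 5], [6]] — the last chunk may be shorter
```

Each chunk produced this way corresponds to one MGET command queued on the pipeline.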