I am using Python to solve Project Euler problems. Many of them require caching the results of past calculations to improve performance, which leads to code like this:
pastResults = [None] * 1000000

def someCalculation(integerArgument):
    # return the result of a calculation performed on integerArgument,
    # for example summing the factorials or squares of its digits
    ...

for eachNumber in range(1, 1000001):
    if pastResults[eachNumber - 1] is None:
        pastResults[eachNumber - 1] = someCalculation(eachNumber)
    # perform additional actions with pastResults[eachNumber - 1]
Does repeatedly decrementing the index hurt the program's performance? Would an empty or dummy zeroth element (so that the zero-based list simulates a one-based one) improve performance by eliminating the repeated decrements?
pastResults = [None] * 1000001

def someCalculation(integerArgument):
    # return the result of a calculation performed on integerArgument,
    # for example summing the factorials or squares of its digits
    ...

for eachNumber in range(1, 1000001):
    if pastResults[eachNumber] is None:
        pastResults[eachNumber] = someCalculation(eachNumber)
    # perform additional actions with pastResults[eachNumber]
I also feel that simulating a one-based array makes the code easier to understand, which is why I don't think it makes sense to start the range from zero, as in for eachNumber in range(1000000), and call someCalculation(eachNumber + 1) (sketched below).
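For reference, this is roughly what that zero-based version would look like (someCalculation is the same placeholder as above):

pastResults = [None] * 1000000

for eachNumber in range(1000000):
    if pastResults[eachNumber] is None:
        pastResults[eachNumber] = someCalculation(eachNumber + 1)
    # perform additional actions with pastResults[eachNumber]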
How significant is the extra memory from the empty zeroth element? What other factors should I be considering? I would prefer answers that are not limited to Python and Project Euler.
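If it matters, this is roughly how I imagine timing the two loops against each other; the timeit harness and the squaring placeholder are only assumptions to make the comparison self-contained, not part of my actual solutions:

import timeit

# Shared setup: an all-None cache and a trivial placeholder calculation
setup = """
pastResults = [None] * 1000001

def someCalculation(integerArgument):
    return integerArgument * integerArgument  # placeholder calculation
"""

decrementing_loop = """
for eachNumber in range(1, 1000001):
    if pastResults[eachNumber - 1] is None:
        pastResults[eachNumber - 1] = someCalculation(eachNumber)
"""

padded_loop = """
for eachNumber in range(1, 1000001):
    if pastResults[eachNumber] is None:
        pastResults[eachNumber] = someCalculation(eachNumber)
"""

# number=1 so each measurement starts from the fresh, all-None list built in setup
print("with decrement:", timeit.timeit(decrementing_loop, setup=setup, number=1))
print("with dummy zeroth:", timeit.timeit(padded_loop, setup=setup, number=1))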
Edit: that should be is None instead of is not None.