This is a solution based on @stx2's idea. One potential problem is floating-point underflow of `beta**N` when `N` becomes large, since `beta < 1` (the same applies to the `cumprod`; for `beta > 1` the symmetric risk is overflow).
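The underflow concern can be demonstrated directly. This is a small sketch (not from the original answer) showing the cumulative powers of `beta` collapsing to exact zero for long series, which is what would break the division by `wt3` below:

```python
import numpy as np

# garchModel2 builds its weights as cumulative products of beta.
# For beta < 1 these decay geometrically, and beta**N underflows to
# 0.0 once it drops below ~5e-324 (the smallest denormal double);
# a weight vector containing exact zeros then blows up the final
# division. (For beta > 1 the symmetric problem is overflow to inf.)
beta = 0.1
wt0 = np.cumprod(np.full(400, beta))
print(wt0[:3])    # still well within double range
print(wt0[-1])    # 0.0: fully underflowed
```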
>>> from numpy import array, cumprod, cumsum, hstack
>>> def garchModel2(e2, omega=0.01, beta=0.1, gamma=0.8):
...     # cumulative powers of beta: [beta, beta**2, ..., beta**(n-1)]
...     wt0 = cumprod(array([beta] * (len(e2) - 1)))
...     wt1 = cumsum(hstack((0., wt0))) + 1       # geometric series for the omega term
...     wt2 = hstack((wt0[::-1], 1.)) * gamma     # weights on the lagged e2 terms
...     wt3 = hstack((1, wt0))[::-1] * beta       # normalizing denominators
...     pt1 = hstack((0., (array(e2) * wt2)[:-1]))
...     pt2 = wt1 * omega
...     return cumsum(pt1) / wt3 + pt2
>>> garchModel([1,2,3,4,5])
array([ 0.01 , 0.811 , 1.6911 , 2.57911 , 3.467911])
>>> garchModel2([1,2,3,4,5])
array([ 0.01 , 0.811 , 1.6911 , 2.57911 , 3.467911])
>>> f1=lambda: garchModel2(range(5))
>>> f=lambda: garchModel(range(5))
>>> T=timeit.Timer('f()', 'from __main__ import f')
>>> T1=timeit.Timer('f1()', 'from __main__ import f1')
>>> T.timeit(1000)
0.01588106868331031
>>> T1.timeit(1000)  # when e2 is small, garchModel2 is slower
0.04536693909403766
>>> f1=lambda: garchModel2(range(10000))
>>> f=lambda: garchModel(range(10000))
>>> T.timeit(1000)
35.745981961394534
>>> T1.timeit(1000)  # when e2 is large, garchModel2 is faster
1.7330512676890066
>>> f1=lambda: garchModel2(range(1000000))
>>> f=lambda: garchModel(range(1000000))
>>> T.timeit(50)
167.33835501439427
>>> T1.timeit(50)  # the difference is even bigger
8.587259274572716
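The doctests above compare against `garchModel`, the loop implementation from the original question, which is not reproduced in this answer. For self-containment, here is a minimal sketch consistent with the outputs shown, assuming the standard GARCH(1,1) recursion seeded with `sigma2[0] = omega`:

```python
import numpy as np

def garchModel(e2, omega=0.01, beta=0.1, gamma=0.8):
    # Plain-Python GARCH(1,1) recursion: each output depends on the
    # previous one, which is why it cannot be written as a single
    # elementwise NumPy expression without the weight tricks above.
    sigma2 = [omega]
    for t in range(1, len(e2)):
        sigma2.append(omega + gamma * e2[t - 1] + beta * sigma2[-1])
    return np.array(sigma2)

print(garchModel([1, 2, 3, 4, 5]))
# -> [0.01, 0.811, 1.6911, 2.57911, 3.467911]
```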
Instead of `beta**N` I used `cumprod`, since `**` can slow things down a lot.
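To illustrate that choice: both formulations yield the same weight vector, but `cumprod` needs only N multiplications while elementwise `**` evaluates a `pow()` per element. A quick equivalence check (my own sketch, not from the original answer):

```python
import numpy as np

beta, N = 0.1, 6
# Elementwise power: one pow() evaluation per element.
pows = beta ** np.arange(1, N + 1)
# Running product: N plain multiplications, no transcendental calls.
prods = np.cumprod(np.full(N, beta))
print(np.allclose(pows, prods))  # True: the weights agree either way
```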