For one of my classes I recently came across Ruby and Python implementations of the Miller-Rabin algorithm for identifying primes between 20 and 29000. I'm curious why, even though they appear to be equivalent implementations, the Python code runs so much faster. I've read that Python is generally faster than Ruby, but is a speed difference of this magnitude to be expected?
miller_rabin.rb:

def miller_rabin(m, k)
  t = (m - 1) / 2
  s = 1
  while t % 2 == 0
    t /= 2
    s += 1
  end
  for r in (0...k)
    b = 0
    b = rand(m) while b == 0
    prime = false
    y = (b**t) % m
    if y == 1
      prime = true
    end
    for i in (0...s)
      if y == (m - 1)
        prime = true
        break
      else
        y = (y * y) % m
      end
    end
    if not prime
      return false
    end
  end
  return true
end

count = 0
for j in (20..29000)
  if j % 2 == 1 and miller_rabin(j, 2)
    count += 1
  end
end
puts count
miller_rabin.py:

import math
import random

def miller_rabin(m, k):
    s = 1
    t = (m - 1) / 2
    while t % 2 == 0:
        t /= 2
        s += 1
    for r in range(0, k):
        rand_num = random.randint(1, m - 1)
        y = pow(rand_num, t, m)
        prime = False
        if y == 1:
            prime = True
        for i in range(0, s):
            if y == m - 1:
                prime = True
                break
            else:
                y = (y * y) % m
        if not prime:
            return False
    return True

count = 0
for j in range(20, 29001):
    if j % 2 == 1 and miller_rabin(j, 2):
        count += 1
print count
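One difference I noticed between the two listings: the Python version computes `y = pow(rand_num, t, m)`, and Python's three-argument `pow` performs modular exponentiation (reducing mod m at each step), whereas the Ruby version computes `y = (b**t) % m`, which builds the full bignum power before reducing. I don't know if this accounts for the whole gap, but for comparison, here is a minimal square-and-multiply sketch of modular exponentiation in Ruby. The helper name `mod_pow` is my own; Ruby only gained a built-in `Integer#pow(exp, mod)` in 2.4, so nothing equivalent ships with 1.9.3.

```ruby
# Square-and-multiply modular exponentiation: computes (base ** exp) % mod
# without ever materializing the full bignum base ** exp.
# (mod_pow is a hypothetical helper name, not part of Ruby 1.9.3.)
def mod_pow(base, exp, mod)
  result = 1
  base %= mod
  while exp > 0
    # Multiply in the current square whenever the low bit of exp is set.
    result = (result * base) % mod if exp.odd?
    base = (base * base) % mod  # square for the next bit
    exp >>= 1
  end
  result
end
```

In the Ruby listing above, `y = (b**t) % m` could then be replaced with `y = mod_pow(b, t, m)`, keeping every intermediate value below m*m instead of letting b**t grow to thousands of digits.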
When I measure the execution time of each with Measure-Command in Windows PowerShell, I get the following:

Python 2.7:
Ticks: 4874403
TotalMilliseconds: 487.4403

Ruby 1.9.3:
Ticks: 682232430
TotalMilliseconds: 68223.243
I would appreciate any insight anyone can give me into why there is such a huge difference.