I'm studying the differences between numpy and cupy, and I noticed that in these two similar programs I created, the cupy version is much slower, even though it runs on the GPU.
Here is the numpy version:
import time
import numpy as np
size = 5000
upperBound = 20
dataSet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
dataLength = np.random.randint(0, high=upperBound, size=size, dtype='l')
randomNumber = np.random.randint(0, high=62, size=size * upperBound, dtype='l')
count = 0
dataCount = 0
start_time = time.time()
for i in range(size):
    lineData = ""
    for j in range(dataLength[i]):
        lineData = lineData + dataSet[randomNumber[count]]
        count = count + 1
    print(lineData)
    dataCount = dataCount + 1
time = str(time.time() - start_time)
print("------------------------\n" + "It took this many seconds: " + time)
print("There were " + str(dataCount) + " many data generations.")
Here is the cupy version:
import time
import cupy as cp
size = 5000
upperBound = 20
dataSet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
dataLength = cp.random.randint(0, high=upperBound, size=size, dtype='l')
randomNumber = cp.random.randint(0, high=62, size=upperBound * size, dtype='l')
count = 0
dataCount = 0
start_time = time.time()
for i in range(size):
    lineData = ""
    for j in range(int(dataLength[i])):
        lineData = lineData + str(dataSet[int(randomNumber[count])])
        count = count + 1
    print(lineData)
    dataCount = dataCount + 1
time = str(time.time() - start_time)
print("-------------------\n" +"It took this many seconds: " + time)
print("There were " + str(dataCount) + " many data generations.")
They are essentially the same code, except that one uses numpy and the other uses cupy. Because of the GPU, I expected cupy to run faster, but that is not the case. numpy's runtime was 0.032 seconds, while cupy's was 0.484 seconds.