Getting the error "Can't differentiate w.r.t. type" when using the autograd function in Python.
Basically, I'm trying to write code for a generalized linear model (GLM), and I want to use autograd to get a function describing the derivative of the loss function with respect to w (the weights), which I can then plug into scipy.optimize.minimize().
Before doing the scipy step, I've been testing whether my function works by feeding in values for the input variables (arrays, in my case) and printing the value of the gradient (again, an array) as output. Here is my code:
import autograd.numpy as np  # autograd's drop-in replacement for numpy
from autograd import grad

def generate_data(n, k, m):
    w = np.zeros((k, 1))  # make first column of weights all zeros
    w[:, [0]] = np.random.randint(-10, high=10, size=(k, m))  # choose length random inputs between -10 and 10
    x = np.random.randint(-10, high=10, size=(n, m))  # choose length random inputs between -10 and 10
    return x, w
def logpyx(x, w):
    p = np.exp(np.dot(x, w.T))      # get exponentials e^wTx
    norm = np.sum(p, axis=1)        # get normalization constant (sum of exponentials)
    pnorm = np.divide(p.T, norm).T  # normalize the exponentials
    ind = []                        # initialize empty list
    for n in np.arange(0, len(x)):
        ind.append(np.random.choice(len(w), p=pnorm[n, :]))  # choose index where y = 1 based on probabilities
    ind = np.array(ind)             # recast list as array
    ys = []                         # initialize empty list
    for n in np.arange(0, len(x)):
        y = [0] * (len(w) - 1)      # initialize list of zeros
        y.insert(ind[n], 1)         # assign value "1" to appropriate index in row
        ys.append(y)                # add row to matrix of ys
    y = np.array(ys)                # recast list as array
    pyx = np.diagonal(np.dot(pnorm, y.T))  # p(y|x)
    log_pyx = np.log(pyx)
    return log_pyx
# input data
n = 100  # number of data points
C = 2    # number of classes (e.g. turn right, turn left, move forward)
m = 1    # number of features in x (e.g. m = 2 for # of left trials and # of right trials)

x, w = generate_data(n, C, m)
log_pyx = logpyx(x, w)      # calculate log likelihoods
grad_logpyx = grad(logpyx)  # take gradient of log_pyx to find updated weights
print(grad_logpyx(x, w))
When I run this, everything works fine until the last line, where I get the error mentioned above.
I clearly don't understand how to use autograd properly, and I must be passing something in the wrong format, since the error seems to be about a data-type mismatch. Any help would be greatly appreciated!