I'm teaching myself ML, and I get an error when I try to write logistic regression in Python. The exercise is from the Stanford online course. I've tried many things, including changing grad to grad.ravel()/grad.flatten(), but none of them worked.
Input:
import numpy as np
data=np.loadtxt(r'E:\ML\machine-learning-ex2\ex2\ex2data1.txt',delimiter=',')
X=data[:,:2]
y=data[:,2].reshape(-1,1)
def sigmoid(z):
    return 1/(np.exp(-1*z)+1)
def costFunction(theta,X,y):
    m=len(y)
    h=sigmoid(np.dot(X,theta))
    J=-1/m*np.sum((np.dot(y.T,np.log(h))+np.dot((1-y).T,np.log(1-h))))
    grad=1/m*np.dot(X.T,(h-y))
    return J,grad
m,n=np.shape(X)
X=np.hstack((np.ones([m,1]),X))
initial_theta=np.zeros([n+1,1])
import scipy.optimize as opt
result = opt.fmin_tnc(func=costFunction, x0=initial_theta, args=(X, y))
Output:
ValueError:
---> 25 result = opt.fmin_tnc(func=costFunction, x0=initial_theta, args=(X, y))
ValueError: tnc: invalid gradient vector from minimized function.
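I suspect it's a shape problem: fmin_tnc flattens x0 to a 1-D array before calling the function, so inside costFunction h comes out 1-D while y is (m, 1), and h - y broadcasts to an (m, m) matrix instead of a vector. A minimal check with made-up data (not the course dataset) reproduces the shapes:

```python
import numpy as np

# Hypothetical stand-in data: m=5 samples, n=2 features plus intercept.
m, n = 5, 2
X = np.hstack((np.ones((m, 1)), np.random.rand(m, n)))
y = np.random.randint(0, 2, size=(m, 1))  # column vector, shape (m, 1)
theta = np.zeros(n + 1)                   # fmin_tnc passes theta as 1-D

h = 1 / (1 + np.exp(-np.dot(X, theta)))  # shape (m,)
diff = h - y                             # broadcasts to (m, m), not (m, 1)
grad = 1 / m * np.dot(X.T, diff)         # shape (n+1, m): not a gradient vector
print(diff.shape, grad.shape)            # (5, 5) (3, 5)
```

If that's the cause, flattening y to 1-D (or reshaping theta to a column inside costFunction) should make the same expressions produce the (n+1,)-shaped gradient that tnc expects, but I'm not sure that's the whole story.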