我使用下面的公式作为我的假设:
并将下面的公式作为成本函数:
所以我试图最小化的对象函数是:
梯度是:
csv 文件的格式如下: y0,x1,x2,x3,... y1,x1,x2,x3,... y2,x1,x2,x3,... y 为 1 或 0(用于分类)培训代码如下:
import numpy as np
import scipy as sp
from scipy.optimize import fmin_bfgs
import pylab as pl
data = np.genfromtxt('../data/small_train.txt', delimiter=',')
y = data[:,0]
#add 1 as the first column of x, the constant term
x = np.append(np.ones((len(y), 1)), data[:,1:], axis = 1)
#sigmoid hypothesis
def h(theta, x):
return 1.0/(1+np.exp(-np.dot(theta, x)))
#cost function
def cost(theta, x, y):
tot = 0
for i in range(len(y)):
tot += y[i]*np.log(h(theta, x[i])) + (1-y[i])*(1-np.log(h(theta, x[i])))
return -tot / len(y)
#gradient
def deviation(theta, x, y):
def f(theta, x, y, j):
tot = 0.0
for i in range(len(y)):
tot += (h(theta, x[i]) - y[i]) * x[i][j]
return tot / len(y)
ret = []
for j in range(len(x[0])):
ret.append(f(theta, x, y, j))
return np.array(ret)
init_theta = np.zeros(len(x[0]))
ret = fmin_bfgs(cost, init_theta, fprime = deviation, args=(x,y))
print ret
我在一个小数据集上运行代码,但似乎我的实现不正确。有人可以帮助我吗?还有一个问题:如您所知,fmin_bfgs 不一定需要 fprime 项,如果我们提供它和不提供它有什么区别?