
I'm learning to compute coefficients with gradient descent. Here is what I'm doing:

    #!/usr/bin/python

    import numpy as np


    # m denotes the number of examples here, not the number of features
    def gradientDescent(x, y, theta, alpha, m, numIterations):
        xTrans = x.transpose()
        for i in range(0, numIterations):
            hypothesis = np.dot(x, theta)
            loss = hypothesis - y
            # avg cost per example (the 2 in 2*m doesn't really matter here.
            # But to be consistent with the gradient, I include it)
            cost = np.sum(loss ** 2) / (2 * m)
            #print("Iteration %d | Cost: %f" % (i, cost))
            # avg gradient per example
            gradient = np.dot(xTrans, loss) / m
            # update
            theta = theta - alpha * gradient
        return theta

    X = np.array([41.9,43.4,43.9,44.5,47.3,47.5,47.9,50.2,52.8,53.2,56.7,57.0,63.5,65.3,71.1,77.0,77.8])
    y = np.array([251.3,251.3,248.3,267.5,273.0,276.5,270.3,274.9,285.0,290.0,297.0,302.5,304.5,309.3,321.7,330.7,349.0])
    n = np.max(X.shape)
    x = np.vstack([np.ones(n), X]).T
    m, n = np.shape(x)
    numIterations = 100000
    alpha = 0.0005
    theta = np.ones(n)
    theta = gradientDescent(x, y, theta, alpha, m, numIterations)
    print(theta)

Now my code above works fine. But if I try multiple variables and replace X with X1 as shown below:

    X1 = np.array([[41.9,43.4,43.9,44.5,47.3,47.5,47.9,50.2,52.8,53.2,56.7,57.0,63.5,65.3,71.1,77.0,77.8],
                   [29.1,29.3,29.5,29.7,29.9,30.3,30.5,30.7,30.8,30.9,31.5,31.7,31.9,32.0,32.1,32.5,32.9]])

then my code fails with the following error:

    JustTestingSGD.py:14: RuntimeWarning: overflow encountered in square
    cost = np.sum(loss ** 2) / (2 * m)
    JustTestingSGD.py:19: RuntimeWarning: invalid value encountered in subtract
    theta = theta - alpha * gradient
    [ nan  nan  nan]

Can anyone tell me how to do gradient descent with X1? The expected output I'm looking for with X1 is:

    [-153.5 1.24 12.08]

I'm also open to other Python implementations. I just want the coefficients (also called thetas) for X1 and y.


1 Answer


The problem is that your algorithm does not converge; it diverges instead. The first error:

    JustTestingSGD.py:14: RuntimeWarning: overflow encountered in square
    cost = np.sum(loss ** 2) / (2 * m)

comes from the fact that at some point squaring the values becomes impossible, because the result no longer fits in a 64-bit float (i.e. it exceeds roughly 1.8 × 10^308).
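As a quick illustration (my own toy example, not part of the original code), squaring a float64 value that is already near that limit overflows to inf and produces the same kind of warning:

    import numpy as np

    loss = np.array([1e200])
    # 1e200 squared would be 1e400, which does not fit in a float64,
    # so NumPy warns about the overflow and the result becomes inf
    print(loss ** 2)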

    JustTestingSGD.py:19: RuntimeWarning: invalid value encountered in subtract
    theta = theta - alpha * gradient

is just a consequence of the previous error: by that point the numbers are no longer meaningful to compute with.

You can actually watch the divergence happen by uncommenting the debug print line: since there is no convergence, the cost keeps increasing.

If you try your function on X1 with a smaller alpha, it converges.
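For reference, here is a minimal sketch of what that could look like, reusing the gradientDescent function from the question. The alpha = 1e-4 and the iteration count are values I assumed for illustration; with unscaled features like these you may need far more iterations (or further tuning) to get close to the expected coefficients:

    import numpy as np

    X1 = np.array([[41.9,43.4,43.9,44.5,47.3,47.5,47.9,50.2,52.8,53.2,56.7,57.0,63.5,65.3,71.1,77.0,77.8],
                   [29.1,29.3,29.5,29.7,29.9,30.3,30.5,30.7,30.8,30.9,31.5,31.7,31.9,32.0,32.1,32.5,32.9]])
    y = np.array([251.3,251.3,248.3,267.5,273.0,276.5,270.3,274.9,285.0,290.0,297.0,302.5,304.5,309.3,321.7,330.7,349.0])

    # Design matrix: one row per example, a leading column of ones for the
    # intercept, then the two feature rows of X1 as columns
    x = np.vstack([np.ones(X1.shape[1]), X1]).T
    m, n = x.shape

    # Assumed step size: small enough that the updates no longer overshoot.
    # With the original alpha = 0.0005 the cost blows up, as seen above.
    alpha = 1e-4
    numIterations = 100000
    theta = gradientDescent(x, y, np.ones(n), alpha, m, numIterations)
    print(theta)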

Answered on 2014-06-25T20:10:54.143