This is a bit of a long shot, but I'm wondering if someone could take a look at this. Am I doing batch gradient descent for linear regression correctly here? It gives the expected answer for a single independent variable plus an intercept, but not for multiple independent variables.

/**
 * (using Colt Matrix library)
 * @param alpha Learning Rate
 * @param thetas Current Thetas
 * @param independent Matrix of examples (one row per example, one column per feature)
 * @param dependent Vector of observed dependent values, one per example
 * @return new Thetas
 */
public DoubleMatrix1D descent(double         alpha,
                              DoubleMatrix1D thetas,
                              DoubleMatrix2D independent,
                              DoubleMatrix1D dependent ) {
    Algebra algebra     = new Algebra();

    // ALPHA*(1/M) in one.
    double  modifier    = alpha / (double)independent.rows();

    //I think this can just skip the transpose of theta.
    //This is the result of running every example Xi through the hypothesis fn:
    //each Xj feature is multiplied by its theta to get the hypothesis values.
    DoubleMatrix1D hypotheses = algebra.mult( independent, thetas );

    //hypothesis - Y
    //Now we have, for each example Xi, the difference between the value
    //predicted by the hypothesis and the actual Yi
    hypotheses.assign(dependent, Functions.minus);

    //Transpose examples (MxN) to NxM so we can matrix-multiply by the Mx1
    //residuals, giving Nx1 deltas (one per feature)
    DoubleMatrix2D transposed = algebra.transpose(independent);

    DoubleMatrix1D deltas     = algebra.mult(transposed, hypotheses );


    // Scale the deltas by 1/m and the learning rate alpha.  (alpha/m)
    deltas.assign(Functions.mult(modifier));

    //Theta = Theta - Deltas
    thetas.assign( deltas, Functions.minus );

    return( thetas );
}
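
For reference, this computes the standard batch update theta := theta - (alpha/m) * X^T * (X*theta - y), and it mutates the passed-in thetas via assign as well as returning it. A minimal, hypothetical usage sketch with Colt (the data, learning rate and iteration count below are made up for illustration):

import cern.colt.matrix.DoubleMatrix1D;
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.DenseDoubleMatrix1D;
import cern.colt.matrix.impl.DenseDoubleMatrix2D;

// First column of 1s so thetas[0] acts as the intercept; here y = 1 + x.
DoubleMatrix2D x = new DenseDoubleMatrix2D(new double[][] {
    { 1.0, 1.0 },
    { 1.0, 2.0 },
    { 1.0, 3.0 }
});
DoubleMatrix1D y      = new DenseDoubleMatrix1D(new double[] { 2.0, 3.0, 4.0 });
DoubleMatrix1D thetas = new DenseDoubleMatrix1D(new double[] { 0.0, 0.0 });

for (int i = 0; i < 1000; i++) {
    thetas = descent(0.1, thetas, x, y);
}
// thetas should approach [1.0, 1.0]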

2 Answers


There is nothing wrong with your implementation, and based on your comment the problem is collinearity that you are introducing when generating x2. This is problematic in regression estimation.

To test your algorithm, generate two independent columns of random numbers. Pick values for w0, w1 and w2, i.e. the coefficients for the intercept, x1 and x2, and compute the dependent values y.

Then see whether your stochastic/batch gradient descent algorithm can recover w0, w1 and w2.
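
A minimal sketch of such a test in Java, assuming the descent method from the question (the coefficient values, sample size, learning rate and iteration count below are arbitrary choices for illustration):

import java.util.Random;
import cern.colt.matrix.DoubleMatrix1D;
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.DenseDoubleMatrix1D;
import cern.colt.matrix.impl.DenseDoubleMatrix2D;

Random rng = new Random(42);
int m = 100;
double w0 = 2.0, w1 = 0.5, w2 = -1.5;    // coefficients to recover

double[][] xData = new double[m][];
double[]   yData = new double[m];
for (int i = 0; i < m; i++) {
    double x1 = rng.nextDouble();
    double x2 = rng.nextDouble();        // independent of x1, so no collinearity
    xData[i] = new double[] { 1.0, x1, x2 };
    yData[i] = w0 + w1 * x1 + w2 * x2;
}

DoubleMatrix2D x      = new DenseDoubleMatrix2D(xData);
DoubleMatrix1D y      = new DenseDoubleMatrix1D(yData);
DoubleMatrix1D thetas = new DenseDoubleMatrix1D(new double[] { 0.0, 0.0, 0.0 });

for (int i = 0; i < 10000; i++) {
    thetas = descent(0.1, thetas, x, y);
}
System.out.println(thetas);  // should approach [2.0, 0.5, -1.5]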

Answered 2013-02-19T15:29:45.977

I think that adding

  // ALPHA*(1/M) in one.
double  modifier    = alpha / (double)independent.rows();

is a bad idea, because you are mixing the gradient function into the gradient descent algorithm. It is better to keep the gradient descent algorithm in a public method of its own, for example the following method in Java:

import org.la4j.Matrix;
import org.la4j.Vector;

public Vector gradientDescent(Matrix x, Matrix y, int kmax, double alpha)
{
    int k = 1;
    // Initial thetas, hard-coded here for one feature plus an intercept.
    Vector thetas = Vector.fromArray(new double[] { 0.0, 0.0 });
    while (k < kmax)
    {
        // Standard update: theta := theta - alpha * gradient
        thetas = thetas.subtract(gradient(x, y, thetas).multiply(alpha));
        k++;
    }
    return thetas;
}
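
The gradient method called above is not shown in the answer. A minimal sketch of what it might look like with la4j, assuming the usual least-squares gradient (1/m) * X^T * (X*theta - y) and that y is an m-by-1 matrix:

private Vector gradient(Matrix x, Matrix y, Vector thetas)
{
    int m = x.rows();
    // Residuals: X*theta - y (predictions minus observed values).
    Vector residuals = x.multiply(thetas).subtract(y.getColumn(0));
    // Batch gradient: (1/m) * X^T * residuals.
    return x.transpose().multiply(residuals).multiply(1.0 / m);
}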
Answered 2018-08-16T21:29:11.210