
Since it uses a sigmoid function rather than a zero/one step activation, I'm guessing this is the correct way to compute gradient descent, right?

  static double calculateOutput( int theta, double weights[], double[][] feature_matrix, int file_index, int globo_dict_size )
  {
     //double sum = x * weights[0] + y * weights[1] + z * weights[2] + weights[3];
     double sum = 0.0;

     for (int i = 0; i < globo_dict_size; i++) 
     {
         sum += ( weights[i] * feature_matrix[file_index][i] );
     }
     //bias
     sum += weights[ globo_dict_size ];

     return sigmoid(sum);
  }

  private static double sigmoid(double x)
  {
      return 1 / (1 + Math.exp(-x));
  }
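
For reference, this is my understanding of how the sigmoid's derivative would enter a squared-error update via the chain rule: the error gets scaled by output * (1 - output). A minimal sketch (variable names like learningRate and target are illustrative, not from my actual code):

```java
// Sketch: one delta-rule update for a single sigmoid unit with squared error.
// Variable names (learningRate, features, target) are illustrative only.
public class SigmoidUpdateSketch
{
    static double sigmoid(double x)
    {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // weights has one extra slot at the end for the bias.
    static void update(double[] weights, double[] features, double target, double learningRate)
    {
        double sum = weights[features.length]; // bias term
        for (int i = 0; i < features.length; i++)
            sum += weights[i] * features[i];

        double output = sigmoid(sum);
        // Chain rule: the sigmoid derivative output * (1 - output) scales the error.
        double delta = (target - output) * output * (1.0 - output);

        for (int i = 0; i < features.length; i++)
            weights[i] += learningRate * delta * features[i];
        weights[features.length] += learningRate * delta; // bias input is 1
    }
}
```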

In the code below I'm trying to update my Θ values (equivalent to the weights in a perceptron, aren't they?). In a related question I was given this formula for that purpose: LEARNING_RATE * localError * feature_matrix__train[p][i] * output_gradient[i]. I've commented out the weight update from my perceptron.

Is this new update rule the right approach?

What is output_gradient supposed to be? Is it equivalent to the sum I compute in my calculateOutput method?

      //LEARNING WEIGHTS
      double localError, globalError, output;
      int p, iteration;

      iteration = 0;
      do 
      {
          iteration++;
          globalError = 0;
          //loop through all instances (complete one epoch)
          for (p = 0; p < number_of_files__train; p++) 
          {
              // calculate predicted class
              output = calculateOutput( theta, weights, feature_matrix__train, p, globo_dict_size );
              // difference between predicted and actual class values
              localError = outputs__train[p] - output;
              //update weights and bias
              for (int i = 0; i < globo_dict_size; i++) 
              {
                  //weights[i] += ( LEARNING_RATE * localError * feature_matrix__train[p][i] );

                  weights[i] += LEARNING_RATE * localError * feature_matrix__train[p][i] * output_gradient[i];

              }
              weights[ globo_dict_size ] += ( LEARNING_RATE * localError );

              //summation of squared error (error value for all instances)
              globalError += (localError*localError);
          }

          /* Root Mean Squared Error */
          if (iteration < 10) 
              System.out.println("Iteration 0" + iteration + " : RMSE = " + Math.sqrt( globalError/number_of_files__train ) );
          else
              System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt( globalError/number_of_files__train ) );
          //System.out.println( Arrays.toString( weights ) );
      } 
      while(globalError != 0 && iteration<=MAX_ITER);

Update: I've now changed a few things, and it looks more like this:

  double loss, cost, hypothesis, gradient;
  int p, iteration;

  iteration = 0;
  do 
  {
    iteration++;
    cost = 0.0;
    loss = 0.0;

    //loop through all instances (complete one epoch)
    for (p = 0; p < number_of_files__train; p++) 
    {

      // 1. Calculate the hypothesis h = X * theta
      hypothesis = calculateHypothesis( theta, feature_matrix__train, p, globo_dict_size );

      // 2. Calculate the loss = h - y and maybe the squared cost (loss^2)/2m
      loss = hypothesis - outputs__train[p];

      // 3. Calculate the gradient = X' * loss / m
      gradient = calculateGradent( theta, feature_matrix__train, p, globo_dict_size, loss );

      // 4. Update the parameters theta = theta - alpha * gradient
      for (int i = 0; i < globo_dict_size; i++) 
      {
          theta[i] = theta[i] - (LEARNING_RATE * gradient);
      }

    }

    //summation of squared error (error value for all instances)
    cost += (loss*loss);


  /* Root Mean Squared Error */
  if (iteration < 10) 
      System.out.println("Iteration 0" + iteration + " : RMSE = " + Math.sqrt( cost/number_of_files__train ) );
  else
      System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt( cost/number_of_files__train ) );
  //System.out.println( Arrays.toString( weights ) );

  } 
  while(cost != 0 && iteration<=MAX_ITER);


}

static double calculateHypothesis( double theta[], double[][] feature_matrix, int file_index, int globo_dict_size )
{
    double hypothesis = 0.0;

     for (int i = 0; i < globo_dict_size; i++) 
     {
         hypothesis += ( theta[i] * feature_matrix[file_index][i] );
     }
     //bias
     hypothesis += theta[ globo_dict_size ];

     return hypothesis;
}

static double calculateGradent( double theta[], double[][] feature_matrix, int file_index, int globo_dict_size, double loss )
{
    double gradient = 0.0;

     for (int i = 0; i < globo_dict_size; i++) 
     {
         gradient += ( feature_matrix[file_index][i] * loss);
     }

     return gradient;
}

public static double hingeLoss()
{
    // l(y, f(x)) = max(0, 1 − y · f(x))

    return HINGE;
}
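
As I understand it, the formula in that comment could be implemented like this (a sketch, assuming labels y ∈ {-1, +1} and a raw score f(x)):

```java
// Sketch: hinge loss l(y, f(x)) = max(0, 1 - y * f(x)), assuming y in {-1, +1}.
public class HingeLossSketch
{
    static double hingeLoss(double y, double fx)
    {
        return Math.max(0.0, 1.0 - y * fx);
    }
}
```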

1 Answer


Your calculateOutput method looks correct. I really don't think the same of your next block of code:

weights[i] += LEARNING_RATE * localError * feature_matrix__train[p][i] * output_gradient[i]

Look at the image you posted in your other question:

(image: the update rules for Theta)

Let's try to identify each part of those rules in your code.

  1. Theta0 and Theta1: these look like weights[i] in your code; I'd expect globo_dict_size = 2;

  2. alpha: seems to be your LEARNING_RATE;

  3. 1 / m: I can't find this in your update rule. m is the number of training instances in Andrew Ng's videos; in your case I think it should be 1 / number_of_files__train. It isn't crucial, though; things should work reasonably well even without it.

  4. The summation: you do this with your calculateOutput function, whose result you use in the localError variable, which you multiply by feature_matrix__train[p][i] (the equivalent of x(i) in Andrew Ng's notation).

    This part is your partial derivative, and part of the gradient!

    Why? Because the partial derivative of [h_theta(x(i)) - y(i)]^2 with respect to Theta0 is equal to:

    2*[h_theta(x(i)) - y(i)] * derivative[h_theta(x(i)) - y(i)]
    derivative[h_theta(x(i)) - y(i)] =
    derivative[Theta0 * x(i, 1) + Theta1*x(i, 2) - y(i)] =
    x(i, 1)
    

    Of course, you should differentiate the entire summation. This is also why Andrew Ng uses 1 / (2m) in the cost function: the 2 cancels the 2 we get from the derivative.

    Remember that x(i, 1), or x(1), should consist of all ones. In your code, you should make sure that:

    feature_matrix__train[p][0] == 1
    
  5. That's it! I don't know what output_gradient[i] in your code is supposed to be; you never define it anywhere.
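
Putting points 1-4 together, here is a sketch of one batch update step with the 1 / m factor included (the class and variable names are mine, not from your code):

```java
// Sketch: one batch gradient-descent step for linear regression,
//   theta_j := theta_j - alpha * (1/m) * sum_i (h_theta(x(i)) - y(i)) * x(i, j)
// Assumes X[i][0] == 1, so theta[0] plays the role of Theta0 (the bias).
public class BatchGradientSketch
{
    static void step(double[] theta, double[][] X, double[] y, double alpha)
    {
        int m = X.length;
        int n = theta.length;
        double[] gradient = new double[n];

        for (int i = 0; i < m; i++)
        {
            // hypothesis h_theta(x(i)) = theta . x(i)
            double hypothesis = 0.0;
            for (int j = 0; j < n; j++)
                hypothesis += theta[j] * X[i][j];

            double loss = hypothesis - y[i];
            // accumulate the per-component partial derivatives
            for (int j = 0; j < n; j++)
                gradient[j] += loss * X[i][j];
        }

        // apply the update, including the 1/m factor
        for (int j = 0; j < n; j++)
            theta[j] -= alpha * gradient[j] / m;
    }
}
```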

I suggest you take a look at this tutorial to get a better understanding of the algorithm you're using. Since you use the sigmoid function it seems you want to do classification, but then you should use a different cost function. That document also covers logistic regression.
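
For the classification case, a sketch of one logistic-regression update with the cross-entropy cost looks almost identical, just with the hypothesis passed through the sigmoid (again, the names here are illustrative):

```java
// Sketch: one batch update for logistic regression with the cross-entropy cost,
//   theta_j := theta_j - alpha * (1/m) * sum_i (sigmoid(theta . x(i)) - y(i)) * x(i, j)
// Labels y are 0/1; X[i][0] == 1 supplies the bias input.
public class LogisticStepSketch
{
    static double sigmoid(double z)
    {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    static void step(double[] theta, double[][] X, double[] y, double alpha)
    {
        int m = X.length;
        int n = theta.length;
        double[] gradient = new double[n];

        for (int i = 0; i < m; i++)
        {
            double z = 0.0;
            for (int j = 0; j < n; j++)
                z += theta[j] * X[i][j];

            // with cross-entropy, the gradient keeps the simple (prediction - label) form
            double error = sigmoid(z) - y[i];
            for (int j = 0; j < n; j++)
                gradient[j] += error * X[i][j];
        }

        for (int j = 0; j < n; j++)
            theta[j] -= alpha * gradient[j] / m;
    }
}
```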

Answered 2015-03-08T10:36:41.000