I am trying to implement stochastic gradient descent, but I am not sure it is 100% correct.
- The cost produced by my stochastic gradient descent algorithm is sometimes far from the one produced by FMINUNC or by batch gradient descent.
- While the batch gradient descent cost converges when I set the learning rate alpha to 0.2, I had to set alpha to 0.0001 for my stochastic implementation not to diverge (a step-size decay sketch follows below). Is that normal?
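A step-size schedule that shrinks alpha over time is often suggested for SGD; below is a minimal sketch of a 1/t decay, where alpha0 and decay are assumed tuning parameters, not values from my runs:

% Minimal sketch of a 1/t step-size decay for SGD; alpha0 and decay
% are assumed tuning parameters, not values used in my runs below.
alpha0 = 0.01;
decay  = 1e-3;
t = 0;                                    % global update counter
for iter = 1:num_iters
    for i = 1:m
        t = t + 1;
        alpha = alpha0 / (1 + decay * t); % step size shrinks over time
        [~, gradJ] = lrCostFunction(theta, X(i,:), y(i), lambda);
        theta = theta - alpha * gradJ;
    end
end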
Here are some results I obtained with a training set of 10,000 examples and num_iter = 100 or 500:
FMINUNC:
Iteration #100 | Cost: 5.147056e-001
BATCH GRADIENT DESCENT 500 ITER
Iteration #500 - Cost = 5.535241e-001
STOCHASTIC GRADIENT DESCENT 100 ITER
Iteration #100 - Cost = 5.683117e-001 % first run
Iteration #100 - Cost = 7.047196e-001 % second run
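For reference, the FMINUNC number above comes from a call of roughly this shape; the exact options are not shown here, so the ones below are assumptions:

% Sketch of the fminunc call behind the FMINUNC result; the options
% shown are assumptions, not my exact settings.
options = optimset('GradObj', 'on', 'MaxIter', 100);
initial_theta = zeros(size(X, 2), 1);
[theta_opt, cost_opt] = fminunc(@(t) lrCostFunction(t, X, y, lambda), ...
                                initial_theta, options);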
Gradient descent implementation for logistic regression:
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    % Cost and gradient over the full training set
    [J, gradJ] = lrCostFunction(theta, X, y, lambda);
    theta = theta - alpha * gradJ; % one update per full pass
    J_history(iter) = J;
    fprintf('Iteration #%d - Cost = %e\r\n', iter, J_history(iter));
end
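For context, the batch loop is run with alpha = 0.2 and 500 iterations as mentioned above; a minimal setup sketch, where theta and lambda are assumptions, would be:

% Setup sketch for the batch loop above; alpha and num_iters are the
% values from my runs, theta and lambda are assumptions.
alpha     = 0.2;
num_iters = 500;
lambda    = 0;                    % assumed: no regularization
theta     = zeros(size(X, 2), 1); % assumed initial point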
Stochastic gradient descent implementation for logistic regression:
% number of training examples
m = length(y);

% STEP 1: shuffle the data so examples are visited in random order
data = [y, X];
data = data(randperm(size(data, 1)), :);
y = data(:, 1);
X = data(:, 2:end);

J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    for i = 1:m
        x = X(i, :); % select one example
        % Cost and gradient for this single example
        [J, gradJ] = lrCostFunction(theta, x, y(i), lambda);
        theta = theta - alpha * gradJ; % update after every example
    end
    J_history(iter) = J; % J is the cost of the last example seen
    fprintf('Iteration #%d - Cost = %e\r\n', iter, J);
end
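Note that the J stored at the end of each pass above is the cost of the last single example, not of the whole training set. A sketch of logging the full-set cost after each pass instead, which would be directly comparable with the batch numbers:

% Sketch: same inner loop, but log the cost over the full training set
% after each pass (this replaces the per-example J used above).
for iter = 1:num_iters
    for i = 1:m
        [~, gradJ] = lrCostFunction(theta, X(i,:), y(i), lambda);
        theta = theta - alpha * gradJ;
    end
    J_history(iter) = lrCostFunction(theta, X, y, lambda); % full-set cost
    fprintf('Iteration #%d - Cost = %e\r\n', iter, J_history(iter));
end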
For reference, here is the logistic regression cost function used in my example:
function [J, grad] = lrCostFunction(theta, X, y, lambda)
m = length(y); % number of training examples

% Regularized logistic-regression cost
hypothesis = sigmoid(X * theta);
costFun = -y .* log(hypothesis) - (1 - y) .* log(1 - hypothesis);
J = (1/m) * sum(costFun) + (lambda/(2*m)) * sum(theta(2:end).^2);

% Gradient from the partial derivatives
beta = hypothesis - y;
grad = (1/m) * (X' * beta);
temp = theta;
temp(1) = 0; % the bias term theta(1) is not regularized
grad = grad + (lambda/m) * temp;
grad = grad(:);
end
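The sigmoid helper called by lrCostFunction is not listed above; the standard elementwise definition would be:

function g = sigmoid(z)
% Elementwise logistic function; works on scalars, vectors, and matrices.
g = 1 ./ (1 + exp(-z));
end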