
I am practicing with SGDRegressor in sklearn, but I ran into a problem that I have reduced to the following code.

import numpy as np
from sklearn.linear_model import SGDRegressor

X = np.array([0,0.5,1]).reshape((3,1))
y = np.array([0,0.5,1]).reshape((3,1))

sgd = SGDRegressor()  
sgd.fit(X, y.ravel())

print("intercept=", sgd.intercept_)
print("coef=", sgd.coef_)

Here is the output:

intercept= [0.19835632]
coef= [0.18652387]

Every run gives an intercept around 0.19 and a coef around 0.18, but the correct answer is clearly intercept = 0 and coef = 1. Even on this simple example the program fails to find the correct parameters. I would like to know where I went wrong.
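(For reference, one way to see that the defaults are the issue is to disable the default L2 penalty and let the optimizer run longer without early stopping. The parameter values below are illustrative choices, not the only ones that work:)

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

X = np.array([0, 0.5, 1]).reshape((3, 1))
y = np.array([0, 0.5, 1])

# alpha=0.0 removes the default L2 regularization;
# tol=None disables early stopping so all max_iter epochs run.
sgd = SGDRegressor(alpha=0.0, max_iter=100_000, tol=None, random_state=42)
sgd.fit(X, y)

print("intercept=", sgd.intercept_)  # close to 0
print("coef=", sgd.coef_)            # close to 1
```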


2 Answers


With n = 10000 data points (sampled with replacement from your 3 original points), SGD gives the following results:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDRegressor

n = 10000

# draw n samples with replacement from the 3 original points
X = np.random.choice([0, 0.5, 1], n, replace=True)
y = X

X = X.reshape((n, 1))

sgd = SGDRegressor(verbose=1)
sgd.fit(X, y)

# -- Epoch 1
# Norm: 0.86, NNZs: 1, Bias: 0.076159, T: 10000, Avg. loss: 0.012120
# Total training time: 0.04 seconds.
# -- Epoch 2
# Norm: 0.96, NNZs: 1, Bias: 0.024337, T: 20000, Avg. loss: 0.000586
# Total training time: 0.04 seconds.
# -- Epoch 3
# Norm: 0.98, NNZs: 1, Bias: 0.008826, T: 30000, Avg. loss: 0.000065
# Total training time: 0.04 seconds.
# -- Epoch 4
# Norm: 0.99, NNZs: 1, Bias: 0.003617, T: 40000, Avg. loss: 0.000010
# Total training time: 0.04 seconds.
# -- Epoch 5
# Norm: 1.00, NNZs: 1, Bias: 0.001686, T: 50000, Avg. loss: 0.000002
# Total training time: 0.05 seconds.
# -- Epoch 6
# Norm: 1.00, NNZs: 1, Bias: 0.000911, T: 60000, Avg. loss: 0.000000
# Total training time: 0.05 seconds.
# -- Epoch 7
# Norm: 1.00, NNZs: 1, Bias: 0.000570, T: 70000, Avg. loss: 0.000000
# Total training time: 0.05 seconds.
# Convergence after 7 epochs took 0.05 seconds

print("intercept=", sgd.intercept_)
print("coef=", sgd.coef_)
# intercept= [0.00057032]
# coef= [0.99892893]

plt.plot(X, y, 'r.')
plt.plot(X, sgd.intercept_ + sgd.coef_*X, 'b-')

(figure: the sampled data points in red with the fitted line in blue)

The following animation shows how SGD regressor starts converging to the correct optima as n goes up in the above code:

(animation: the SGD fit converging to the correct line as n increases)

Answered 2021-03-03T19:10:35.530

SGD (stochastic gradient descent) is intended for large-scale data. For such a trivially small sample, I recommend plain linear regression instead. As the "no free lunch" theorem states, no single model suits every problem, so you should try different models to find the best one (while also understanding the context of your data: distribution type, variance, skewness, and so on). Try the following model:

from sklearn.linear_model import LinearRegression

# X and y as defined in the question
lr = LinearRegression()
lr.fit(X, y.ravel())
lr.predict([[0], [0.5], [1]])
# output -> array([1.11022302e-16, 5.00000000e-01, 1.00000000e+00])
Answered 2021-03-01T06:33:12.123