
When doing ridge regression by hand, defining it for example as

solve(t(X) %*% X + lbd*I) %*% t(X) %*% y

I get results that differ from those computed by MASS::lm.ridge. Why? For ordinary linear regression the manual approach (computing the pseudo-inverse) works fine.

Here is my minimal reproducible example:

library(tidyverse)

ridgeRegression = function(X, y, lbd) {
  # closed-form ridge estimate: (X'X + lbd*I)^(-1) X'y, returned as a row vector
  Rinv = solve(t(X) %*% X + lbd*diag(ncol(X)))
  t(Rinv %*% t(X) %*% y)
}

# generate some data:
set.seed(0)
tb1 = tibble(
  x0 = 1,
  x1 = seq(-1, 1, by=.01),
  x2 = x1 + rnorm(length(x1), 0, .1),
  y  = x1 + x2 + rnorm(length(x1), 0, .5)
)
X = as.matrix(tb1 %>% select(x0, x1, x2))

# sanity check: force ordinary linear regression
# and compare it with the built-in linear regression:
ridgeRegression(X, tb1$y, 0) - coef(summary(lm(y ~ x1 + x2, data=tb1)))[, 1]
# looks the same: -2.94903e-17 1.487699e-14 -2.176037e-14

# compare manual ridge regression to MASS ridge regression:
ridgeRegression(X, tb1$y, 10) - coef(MASS::lm.ridge(y ~ x0 + x1 + x2 - 1, data=tb1, lambda = 10))
# noticeably different: -0.0001407148 0.003689412 -0.08905392

1 Answer


MASS::lm.ridge scales the data before fitting the model, which explains the difference in the coefficients.

You can confirm this by typing MASS::lm.ridge at the R console and reading the function's code.
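
For example:

MASS::lm.ridge   # prints the function body, including the Xscale lines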

Here is the body of lm.ridge with the scaling part commented out:

X = as.matrix(tb1 %>% select(x0, x1, x2))
n <- nrow(X); p <- ncol(X)
# the scaling step, disabled for this comparison:
#Xscale <- drop(rep(1/n, n) %*% X^2)^0.5
#X <- X/rep(Xscale, rep(n, p))
# ridge solution computed via the SVD of X
Xs <- svd(X)
rhs <- t(Xs$u) %*% tb1$y
d <- Xs$d
# ordinary least-squares fit, only used for the HKB/LW penalty estimates
lscoef <- Xs$v %*% (rhs/d)
lsfit <- X %*% lscoef
resid <- tb1$y - lsfit
s2 <- sum(resid^2)/(n - p)
HKB <- (p-2)*s2/sum(lscoef^2)
LW <- (p-2)*s2*n/sum(lsfit^2)
# ridge coefficients for a single penalty, lambda = 10 hard-coded
k <- 1
dx <- length(d)
div <- d^2 + rep(10, rep(dx, k))
a <- drop(d*rhs)/div
dim(a) <- c(dx, k)
coef <- Xs$v %*% a
coef
#             x0        x1        x2
#[1,] 0.01384984 0.8667353 0.9452382
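
Going the other direction, here is a minimal sketch that reproduces the numbers reported by coef(MASS::lm.ridge(...)) with the manual formula. It assumes, based on the scaling code above and my reading of MASS, that coef() on the returned "ridgelm" object divides the internally scaled coefficients back by the stored column scales; there is no intercept adjustment here because of the - 1 in the formula:

n <- nrow(X); p <- ncol(X)
Xscale <- drop(rep(1/n, n) %*% X^2)^0.5   # per-column root mean square, as in lm.ridge
Xsc <- X / rep(Xscale, rep(n, p))         # scaled design matrix
bsc <- solve(t(Xsc) %*% Xsc + 10*diag(p)) %*% t(Xsc) %*% tb1$y
drop(bsc) / Xscale                        # manual ridge fit, mapped back to the original scale
coef(MASS::lm.ridge(y ~ x0 + x1 + x2 - 1, data=tb1, lambda = 10))
# the two result vectors should agree up to numerical precision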

answered 2020-02-13T17:38:04.287