I have a strange problem. I ran the model below, which includes "Valence.c" as one of the predictors. This is a predictor coded "0" or "1", representing "positive" and "negative". The predictor is centred, so the values are actually "-0.5" and "0.5".
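For context, the centring step itself was nothing fancy; roughly this sketch (the raw column name Valence is an assumption, coded 0 = positive, 1 = negative):

# sketch of the centring: subtracting 0.5 turns the 0/1 coding into -0.5/0.5
# (column name Valence is assumed here; 0 = positive, 1 = negative)
FinalData_forpoisson$Valence.c <- FinalData_forpoisson$Valence - 0.5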
> loss.1 <- glmer.nb(Loss_across.Chain ~ Posn.c*Valence.c + (Valence.c|mood.c/Chain), data = FinalData_forpoisson, control = glmerControl(optimizer = "bobyqa", check.conv.grad = .makeCC("warning", 0.05)))
I get the following output:
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: Negative Binomial(4.9852)  ( log )
Formula: Loss_across.Chain ~ Posn.c * Valence.c + (Valence.c | mood.c/Chain)
   Data: FinalData_forpoisson
Control: ..3

     AIC      BIC   logLik deviance df.resid
  1894.7   1945.3   -936.4   1872.7      725

Scaled residuals:
    Min      1Q  Median      3Q     Max
-1.3882 -0.7225 -0.5190  0.4375  7.1873

Random effects:
 Groups       Name        Variance  Std.Dev.  Corr
 Chain:mood.c (Intercept) 8.782e-15 9.371e-08
              Valence.c   9.608e-15 9.802e-08 0.48
 mood.c       (Intercept) 0.000e+00 0.000e+00
              Valence.c   1.654e-14 1.286e-07  NaN
Number of obs: 736, groups:  Chain:mood.c, 92; mood.c, 2

Fixed effects:
                 Estimate Std. Error z value Pr(>|z|)
(Intercept)      -0.19255    0.04794  -4.016 5.92e-05 ***
Posn.c           -0.61011    0.04122 -14.800  < 2e-16 ***
Valence.c        -0.27372    0.09589  -2.855  0.00431 **
Posn.c:Valence.c  0.38043    0.08245   4.614 3.95e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
            (Intr) Posn.c Vlnc.c
Posn.c       0.491
Valence.c    0.029 -0.090
Psn.c:Vlnc. -0.090  0.062  0.491
Since the fixed effect of Valence.c is negative, I thought I would try recoding the variable so that positive is now "0.5" and negative is now "-0.5". I figured an increase in the incidence rate would be easier to interpret than a decrease. So I ran the same model, except that it calls a data file with the reversed coding:
> loss.2 <- glmer.nb(Loss_across.Chain ~ Posn.c*Valence.c + (Valence.c|mood.c/Chain), data = LossAnalysis_ValenceCodingReversed, control = glmerControl(optimizer = "bobyqa", check.conv.grad = .makeCC("warning", 0.05)))
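The reversed file differs from the original only in the sign of the centred predictor; a sketch of how such a file could be built (reusing the data-frame names from above):

# sketch: flip the sign of the centred predictor, so positive becomes 0.5
# and negative becomes -0.5 (data-frame names reused from above)
LossAnalysis_ValenceCodingReversed <- FinalData_forpoisson
LossAnalysis_ValenceCodingReversed$Valence.c <- -FinalData_forpoisson$Valence.c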
I get these warning messages:
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
unable to evaluate scaled gradient
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge: degenerate Hessian with 1 negative eigenvalues
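(To rule out a problem with the data file itself: the reversed column should be exactly the negation of the original, and a check along these lines would confirm that, assuming the two files share the same row order:)

# sanity check (sketch): the reversed predictor should be the exact negation
# of the original, assuming identical row order in both data frames
all.equal(LossAnalysis_ValenceCodingReversed$Valence.c,
          -FinalData_forpoisson$Valence.c)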
Why does changing the reference group mean the model now fails to converge? I have the same number of observations for positive and negative valence. Any help would be great!
Thanks