I am estimating a random-effects logit model with glmer and would like to report the marginal effects of the independent variables. For glm models, the mfx package helps compute marginal effects. Is there any package or function that does this for glmer objects?
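For a plain glm this is roughly what I have in mind (a minimal sketch, using the mydata object defined in the reproducible example below):

library(mfx)
# marginal effects for an ordinary logit, evaluated at the means of the covariates
glm_me <- logitmfx(admit ~ gre + gpa, data = mydata, atmean = TRUE)
glm_me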

Thanks for your help.

A reproducible example is given below:

## mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
## as of 2020-08-24:
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$rank <- factor(mydata$rank) #creating ranks
id <- rep(1:ceiling(nrow(mydata)/2), times=c(2)) #creating ID variable
mydata <- cbind(mydata,data.frame(id,stringsAsFactors=FALSE)) 
set.seed(12345)
mydata$ran <- runif(nrow(mydata),0,1) #creating a random variable

library(lme4)
cfelr <- glmer(admit ~ (1 | id) + rank + gpa + ran + gre, data = mydata, family = binomial)
summary(cfelr)

4 Answers


You can use the ggeffects package (see the examples in the package vignettes). For your code, this could look like the following:

library(ggeffects)
# dat is a data frame with marginal effects
dat <- ggpredict(cfelr, terms = "rank")
plot(dat)
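The object returned by ggpredict() is itself a data frame of predicted probabilities, so you can also inspect it directly or condition on several terms at once (a small sketch, assuming the cfelr model from the question):

as.data.frame(dat)                                   # predicted probabilities per rank
dat2 <- ggpredict(cfelr, terms = c("gre", "rank"))   # predictions over gre, grouped by rank
plot(dat2)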

Or, as Benjamin mentioned, you can use the sjPlot package with its plot_model() function and plot type "pred" (which simply wraps the ggeffects package for marginal-effects plots):

library(sjPlot)
plot_model(cfelr, type = "pred", terms = "rank")
Answered 2017-10-23T14:46:44.970

This is a less technical answer, but it may point to a useful resource. I am a fan of the sjPlot package, which produces plots of the marginal effects of a glmer object, like so:

library(sjPlot)
sjp.glmer(cfelr, type = "eff")

The package also provides many options for exploring a glmer model's fixed and random effects. https://github.com/strengejacke/sjPlot
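Note that newer sjPlot versions have replaced sjp.glmer() with plot_model(); the fixed and random effects can then be plotted roughly like this (a sketch, assuming the cfelr model from the question):

plot_model(cfelr, type = "est")  # fixed-effect estimates
plot_model(cfelr, type = "re")   # estimated random intercepts per id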

Answered 2017-10-21T22:38:17.473

Here is an approach using the margins package:

library(margins)
library(lme4)

gm1 <- glmer(cbind(incidence, size - incidence) ~ period +
                 (1 | herd),
             data = cbpp,
             family = binomial)

m <- margins(gm1, data = cbpp)
m
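The same pattern should carry over to the model from the question, and summary() on the margins object reports average marginal effects with standard errors (a sketch, assuming cfelr and mydata from above):

m_cfelr <- margins(cfelr, data = mydata)
summary(m_cfelr)   # average marginal effects with standard errors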
Answered 2019-04-11T08:22:26.710

My solution does not answer the question,

"Is there a way of getting “marginal effects” from a glmer object",

but rather,

"Is there a way of getting marginal logistic regression coefficients from a conditional logistic regression with one random intercept?"

I am only offering this write-up because the reproducible example provided is a conditional logistic regression with one random intercept, and I hope it is helpful.

The R code is based on the work of Patrick Heagerty (click "View Raw" to see the pdf), and I include a reproducible example below from my GitHub version of his lnMLE package (excuse the warnings at installation; I am shoehorning Patrick's non-CRAN package). I omit the output for everything except the last line, compare, which shows the fixed-effect coefficients side by side.

library(devtools)
install_github("swihart/lnMLE_1.0-2")
library(lnMLE)
## run the example from the logit.normal.mle help page
## see also the accompanying document (click 'View Raw' on page below:)
## https://github.com/swihart/lnMLE_1.0-2/blob/master/inst/doc/lnMLEhelp.pdf
data(eye_race)
attach(eye_race)
marg_model <- logit.normal.mle(meanmodel = value ~ black,
                           logSigma= ~1,
                           id=eye_race$id,
                           model="marginal",
                           data=eye_race,
                           tol=1e-5,
                           maxits=100,
                           r=50)
marg_model
cond_model <- logit.normal.mle(meanmodel = value ~ black,
                           logSigma= ~1,
                           id=eye_race$id,
                           model="conditional",
                           data=eye_race,
                           tol=1e-5,
                           maxits=100,
                           r=50)
cond_model
compare<-round(cbind(marg_model$beta, cond_model$beta),2)
colnames(compare)<-c("Marginal", "Conditional")
compare

The output of the last line:

compare
            Marginal Conditional
(Intercept)    -2.43       -4.94
black           0.08        0.15
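This kind of gap between the conditional and marginal estimates is expected: for a logistic model with a single normal random intercept, the marginal coefficients are approximately the conditional ones shrunk by sqrt(1 + c^2 * sigma^2), with c = 16*sqrt(3)/(15*pi) (Zeger, Liang & Albert, 1988). A small helper to illustrate the formula (how to extract sigma, the random-intercept standard deviation, from the lnMLE fit is not shown above, so the example call is hypothetical):

# approximate conditional-to-marginal attenuation for a logistic random-intercept model
attenuate <- function(beta_conditional, sigma) {
  c_const <- 16 * sqrt(3) / (15 * pi)
  beta_conditional / sqrt(1 + c_const^2 * sigma^2)
}
# e.g. attenuate(cond_model$beta, sigma = ...)  # sigma from the fitted conditional model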

I attempted the reproducible example given, but had problems with both the glmer and lnMLE implementations; again I only include output pertaining to the comparison results and the warnings from the glmer() call:

##original question / answer... glmer() function gave a warning and the lnMLE did not fit well...
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
mydata$rank <- factor(mydata$rank) #creating ranks
id <- rep(1:ceiling(nrow(mydata)/2), times=c(2)) #creating ID variable
mydata <- cbind(mydata,data.frame(id,stringsAsFactors=FALSE))
set.seed(12345)
mydata$ran <- runif(nrow(mydata),0,1) #creating a random variable

library(lme4)
cfelr <- glmer(admit ~ (1 | id) + rank + gpa + ran + gre, 
               data=mydata,
               family = binomial)

Which gave:

Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 0.00161047 (tol = 0.001, component 2)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model is nearly unidentifiable: very large eigenvalue
 - Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
 - Rescale variables?

but I foolishly went on without rescaling, trying to apply logit.normal.mle to the example given. However, the conditional model doesn't converge or produce standard error estimates:

summary(cfelr)
library(devtools)
install_github("swihart/lnMLE_1.0-2")
library(lnMLE)

mydata$rank2 = mydata$rank==2
mydata$rank3 = mydata$rank==3
mydata$rank4 = mydata$rank==4

cfelr_cond =  logit.normal.mle(meanmodel = admit ~ rank2+rank3+rank4+gpa+ran+gre, 
                               logSigma = ~1 , 
                               id=id, 
                               model="conditional", 
                               data=mydata, 
                               r=50, 
                               tol=1e-6, 
                               maxits=500)
cfelr_cond


cfelr_marg =  logit.normal.mle(meanmodel = admit ~ rank2+rank3+rank4+gpa+ran+gre,
                               logSigma = ~1 , 
                               id=id, 
                               model="marginal", 
                               data=mydata, 
                               r=50, 
                               tol=1e-6, 
                               maxits=500)
cfelr_marg


compare_glmer<-round(cbind(cfelr_marg$beta, cfelr_cond$beta,summary(cfelr)$coeff[,"Estimate"]),3)
colnames(compare_glmer)<-c("Marginal", "Conditional","glmer() Conditional")
compare_glmer

The last line reveals that cfelr_cond did not actually evaluate a conditional model; it simply returned the marginal coefficients, with no standard errors.

> compare_glmer
            Marginal Conditional glmer() Conditional
(Intercept)   -4.407      -4.407              -4.425
rank2         -0.667      -0.667              -0.680
rank3         -1.832      -1.833              -1.418
rank4         -1.930      -1.930              -1.585
gpa            0.547       0.548               0.869
ran            0.860       0.860               0.413
gre            0.004       0.004               0.002

I hope to iron out these issues. Any help/comments appreciated. I'll give status updates when I can.
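If I revisit this, the first step will be the rescaling that the glmer warning asks for; a minimal sketch (standardized copies of gre and gpa, then a refit of the same model):

mydata$gre_s <- as.numeric(scale(mydata$gre))
mydata$gpa_s <- as.numeric(scale(mydata$gpa))
cfelr_s <- glmer(admit ~ (1 | id) + rank + gpa_s + ran + gre_s,
                 data = mydata, family = binomial)
summary(cfelr_s)   # the "Rescale variables?" warning should no longer appear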

Answered 2014-08-28T17:43:28.050