This is a continuation of a question started in another thread.
I ran a logistic regression with sklearn, using code similar to the following:
from pandas import read_table
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import linear_model

vect = CountVectorizer(binary=True)
a = read_table('text.tsv', sep='\t', index_col=False)
X = vect.fit_transform(a['text'].values)
logreg = linear_model.LogisticRegression(C=1)
d = logreg.fit(X, a['label'])
d.coef_
Now I want to link the values in d.coef_ to the unique terms that make up the columns of my sparse matrix X. What is the correct way to do this? I can't seem to get it to work, even though it looks like X should have a words_ attribute. I get:
In [48]: X.vocabulary_
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-48-138ab7dd95ed> in <module>()
----> 1 X.vocabulary_
/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/scipy/sparse/base.pyc in __getattr__(self, attr)
497 return self.getnnz()
498 else:
--> 499 raise AttributeError(attr + " not found")
500
501 def transpose(self):
AttributeError: vocabulary_ not found
Going further, if I wanted to get statistical significance and confidence intervals for these coefficients (along the lines of what you get from R's glm), is that possible? For example,
##
## Call:
## glm(formula = admit ~ gre + gpa + rank, family = "binomial",
## data = mydata)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.627 -0.866 -0.639 1.149 2.079
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.98998 1.13995 -3.50 0.00047 ***
## gre 0.00226 0.00109 2.07 0.03847 *
## gpa 0.80404 0.33182 2.42 0.01539 *
## rank2 -0.67544 0.31649 -2.13 0.03283 *
## rank3 -1.34020 0.34531 -3.88 0.00010 ***
## rank4 -1.55146 0.41783 -3.71 0.00020 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 499.98 on 399 degrees of freedom
## Residual deviance: 458.52 on 394 degrees of freedom
## AIC: 470.5
##
## Number of Fisher Scoring iterations: 4
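As far as I can tell, sklearn does not report standard errors, so the closest I've gotten is computing Wald statistics by hand from the inverse Fisher information. A sketch on synthetic data, with regularization effectively switched off via a huge C (my workaround, not an official "no penalty" option in the sklearn version I'm on):

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

# Synthetic data: y depends positively on x0, negatively on x1
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (1.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# Huge C makes the penalized fit approximate plain maximum likelihood
logreg = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)

# Wald standard errors: cov(beta) = inv(X'WX), W = diag(p * (1 - p)),
# with an intercept column added to match the fitted model
Xd = np.hstack([np.ones((X.shape[0], 1)), X])
p = logreg.predict_proba(X)[:, 1]
W = p * (1 - p)
cov = np.linalg.inv(Xd.T @ (Xd * W[:, None]))
se = np.sqrt(np.diag(cov))

beta = np.concatenate([logreg.intercept_, logreg.coef_[0]])
z = beta / se                                  # z value column
pvals = 2 * stats.norm.sf(np.abs(z))           # Pr(>|z|) column
ci = np.column_stack([beta - 1.96 * se, beta + 1.96 * se])  # 95% CI
```

Presumably statsmodels' Logit (via sm.Logit(y, X).fit().summary()) would give an R-style table directly, but I haven't verified its output against glm's, and I'm unsure whether either approach is sound for a high-dimensional sparse X like mine.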