
Assuming a normal distribution, I have sample data that I want to compute a confidence interval for.

I have found and installed the numpy and scipy packages and have gotten numpy to return the mean and standard deviation (numpy.mean(data), with data being a list). Any advice on getting a sample confidence interval would be much appreciated.


6 Answers

import numpy as np
import scipy.stats


def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)  # sample mean and standard error of the mean
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)  # half-width: standard error times the t critical value
    return m, m-h, m+h

You can calculate it like this.
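
For example, a minimal usage sketch (the sample values below are made up for illustration):

data = [1.2, 2.3, 1.9, 2.5, 2.1, 1.8]  # hypothetical sample
m, lo, hi = mean_confidence_interval(data, confidence=0.95)
print(f"mean = {m:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")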

answered 2013-02-22T22:18:58.337

Here is a shortened version of shasan's code, calculating the 95% confidence interval of the mean of array a:

import numpy as np, scipy.stats as st

st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))

But using StatsModels' tconfint_mean is arguably even nicer:

import statsmodels.stats.api as sms

sms.DescrStatsW(a).tconfint_mean()

The underlying assumptions for both are that the sample (array a) was drawn independently from a normal distribution with unknown standard deviation (see MathWorld or Wikipedia).

For large sample sizes n, the sample mean is normally distributed, and one can calculate its confidence interval using st.norm.interval() (as suggested in Jaime's comment). But the above solutions are also correct for small n, where st.norm.interval() gives confidence intervals that are too narrow (i.e., "fake confidence"). See my answer to a similar question for more details (and one of Russ's comments there).

Here is an example where the correct options give (essentially) identical confidence intervals:

In [9]: a = range(10,14)

In [10]: mean_confidence_interval(a)
Out[10]: (11.5, 9.4457397432391215, 13.554260256760879)

In [11]: st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))
Out[11]: (9.4457397432391215, 13.554260256760879)

In [12]: sms.DescrStatsW(a).tconfint_mean()
Out[12]: (9.4457397432391197, 13.55426025676088)

And finally, the incorrect result using st.norm.interval():

In [13]: st.norm.interval(0.95, loc=np.mean(a), scale=st.sem(a))
Out[13]: (10.23484868811834, 12.76515131188166)
answered 2015-12-26T18:56:16.667

Starting with Python 3.8, the standard library provides the NormalDist object as part of the statistics module:

from statistics import NormalDist

def confidence_interval(data, confidence=0.95):
  dist = NormalDist.from_samples(data)
  z = NormalDist().inv_cdf((1 + confidence) / 2.)
  h = dist.stdev * z / ((len(data) - 1) ** .5)
  return dist.mean - h, dist.mean + h

This:

  • Creates a NormalDist object from the data sample (NormalDist.from_samples(data)), which gives us access to the sample's mean and standard deviation via NormalDist.mean and NormalDist.stdev.

  • Computes the Z-score based on the standard normal distribution (represented by NormalDist()) for the given confidence, using the inverse of the cumulative distribution function (inv_cdf).

  • Produces the confidence interval based on the sample's standard deviation and mean.


This assumes the sample size is big enough (let's say more than ~100 points) in order to use the standard normal distribution rather than the Student's t distribution to compute the z value.
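
As a quick check, a minimal usage sketch of the confidence_interval function above (the sample values are made up; in practice you would want ~100+ points, per the note above):

data = [1.2, 2.3, 1.9, 2.5, 2.1, 1.8]  # hypothetical sample
low, high = confidence_interval(data, confidence=0.95)
print(low, high)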

answered 2019-03-20T22:06:14.613

Start by looking up the z-value for your desired confidence interval in a look-up table. The confidence interval is then mean +/- z*sigma, where sigma is the estimated standard deviation of your sample mean, given by sigma = s / sqrt(n), where s is the standard deviation computed from your sample data and n is your sample size.
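
As a sketch of the same idea in code, using scipy.stats.norm.ppf in place of a look-up table (the sample values are made up for illustration):

import numpy as np
from scipy.stats import norm

data = np.array([2.1, 2.5, 1.9, 2.3, 2.8, 2.2])  # hypothetical sample
z = norm.ppf(0.975)              # z-value for a 95% confidence interval
s = data.std(ddof=1)             # standard deviation computed from the sample
sigma = s / np.sqrt(len(data))   # estimated standard deviation of the sample mean
lower, upper = data.mean() - z * sigma, data.mean() + z * sigma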

answered 2013-02-22T22:15:04.163

Regarding Ulrich's answer: that is using the t-value. We use this when the true variance is unknown, which is the case when the only data you have is the sample data.

As for bogatron's answer, that involves z-tables. The z-tables are used when the variance is already known and provided; you then also have sample data. Sigma is not the estimated standard deviation of the sample mean there. It is already known.

So assuming the variance is known and we want 95% confidence:

from scipy.stats import norm
alpha = 0.95
# Define our z
ci = alpha + (1-alpha)/2
# Lower and upper interval bounds, where n is the sample size
c_lb = sample_mean - norm.ppf(ci)*((sigma/(n**0.5)))
c_ub = sample_mean + norm.ppf(ci)*((sigma/(n**0.5)))

With only sample data and an unknown variance (meaning that the variance has to be calculated solely from the sample data), Ulrich's answer works perfectly. However, you may also want to designate the confidence interval. If your data is a and you want a confidence interval of 0.95 (note that tconfint_mean takes alpha = 1 - confidence, so alpha=0.05 below corresponds to a 95% interval):

import statsmodels.stats.api as sms
conf = sms.DescrStatsW(a).tconfint_mean(alpha=0.05)
conf
answered 2021-11-23T22:01:23.227

Based on the original but with some concrete examples:

import numpy as np

def mean_confidence_interval(data, confidence: float = 0.95) -> tuple[float, np.ndarray]:
    """
    Returns (a tuple of) the mean and the confidence-interval half-width for the given data.
    Data is a np.arrayable iterable.

    ref:
        - https://stackoverflow.com/a/15034143/1601580
        - https://github.com/WangYueFt/rfs/blob/f8c837ba93c62dd0ac68a2f4019c619aa86b8421/eval/meta_eval.py#L19
    """
    import scipy.stats
    import numpy as np

    a: np.ndarray = 1.0 * np.array(data)
    n: int = len(a)
    if n == 1:
        import logging
        logging.warning('The first dimension of your data is 1, perhaps you meant to transpose your data? or remove the '
                        'singleton dimension?')
    m, se = a.mean(), scipy.stats.sem(a)
    tp = scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    h = se * tp
    return m, h

def ci_test_float():
    import numpy as np
    # - one WRONG data set of size 1 by N
    data = np.random.randn(1, 30)  # gives nan CIs because len(data) sets n=1, so not this shape!
    m, ci = mean_confidence_interval(data)
    print('-- you should get a mean and a list of nan CIs (since the data is in the wrong format, it thinks it is 30 data '
          'sets of length 1)')
    print(m, ci)

    # right data as N by 1
    data = np.random.randn(30, 1)
    m, ci = mean_confidence_interval(data)
    print('-- gives a mean and a list of length 1 for a single CI (since it thinks you have a single data set)')
    print(m, ci)

    # multiple data sets (7) of size N (=30)
    data = np.random.randn(30, 7)
    print('-- gives 7 CIs for the 7 data sets of length 30. 30 is the number you would want to be large if you were using '
          'z(p) due to the CLT.')
    m, ci = mean_confidence_interval(data)
    print(m, ci)

ci_test_float()

Output:

-- you should get a mean and a list of nan CIs (since the data is in the wrong format, it thinks it is 30 data sets of length 1)
0.1431623130952463 [nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan]
-- gives a mean and a list of length 1 for a single CI (since it thinks you have a single data set)
0.04947206018132864 [0.40627264]
-- gives 7 CIs for the 7 data sets of length 30. 30 is the number you would want to be large if you were using z(p) due to the CLT.
-0.03585104402718902 [0.31867309 0.35619134 0.34860011 0.3812853  0.44334033 0.35841138
 0.40739732]

I think Num_samples by Num_datasets is right, but if it's not, let me know in the comment section.


As a bonus, a torch implementation that nearly only uses torch:

import scipy.stats
import torch
from torch import Tensor


def torch_compute_confidence_interval(data: Tensor,
                                      confidence: float = 0.95
                                      ) -> tuple[Tensor, Tensor]:
    """
    Computes the confidence interval for a given survey of a data set.
    """
    n: int = len(data)
    mean: Tensor = data.mean()
    # se: Tensor = scipy.stats.sem(data)  # compute standard error
    # se, mean: Tensor = torch.std_mean(data, unbiased=True)  # compute standard error
    se: Tensor = data.std(unbiased=True) / (n ** 0.5)
    t_p: float = float(scipy.stats.t.ppf((1 + confidence) / 2., n - 1))
    ci = t_p * se
    return mean, ci
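
A minimal usage sketch (random data just for illustration):

values = torch.randn(100)  # hypothetical per-sample measurements
mean, ci = torch_compute_confidence_interval(values, confidence=0.95)
print(f"{mean.item():.4f} +/- {ci.item():.4f}")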

Some comments on CIs (or see https://stats.stackexchange.com/questions/554332/confidence-interval-given-the-population-mean-and-standard-deviation?noredirect=1&lq=1):

"""
Review for confidence intervals. Confidence intervals say that the true mean is inside the estimated confidence interval
(the r.v. the user generates). In particular it says:
    Pr[mu^* \in [mu_n +- t.val(p) * std_n / sqrt(n) ] ] >= p
e.g. p = 0.95
This does not say that for a specific CI you compute the true mean is in that interval with prob 0.95. Instead it means
that if you surveyed/sampled 100 data sets D_n = {x_i}^n_{i=1} of size n (where n is ideally >=30) then for 95 of those
you'd expect to have the true mean inside the CI computed for that current data set. Note you can never check for which
ones mu^* is in the CI since mu^* is unknown. If you knew mu^* you wouldn't need to estimate it. This analysis assumes
that the estimator/value you are estimating is the true mean using the sample mean (estimator). Since it usually uses
the t.val or z.val (the second for the standardized r.v. of a normal), it means the approximation that mu_n ~ gaussian
must hold. This is most likely true if n >= 30. Note this is similar to statistical learning theory, where we use
the MLE/ERM estimator to choose a function with delta, gamma etc. reasoning. Note that if you do the algebra you can also
say that the sample mean is in that interval w.r.t. mu^*, but that is boring; no one cares, since you do not know mu^*
so it's not helpful.

An example use could be for computing the CI of the loss (e.g. 0-1, CE loss, etc). The mu^* you want is the expected
risk. So x_i = loss(f(x_i), y_i) and you are computing the CI for what the true expected risk is for that specific loss
function you chose. So mu_n = empirical mean of the loss and std_n = (unbiased) estimate of the std, and then you can
simply plug in the values.

Assumptions for p-CI:
    - we are making a statement that mu^* is inside mu +- pCI = mu +- t_p * sig_n / sqrt(n), with sig_n ~ Var[x],
    p% of the time.
    - we are estimating mu^*, a mean
    - since the quantity of interest is mu^*, the z_p value (or p-value, depending on which one is the unknown) is
    computed using the normal distribution.
    - p(mu) ~ N(mu; mu_n, sig_n / sqrt(n)), via the CLT, which holds for sample means. Ideally n >= 30.
    - x ~ p^*(x) are iid.

Std_n vs t_p*std_n/sqrt(n)
    - std_n = var(x) is more pessimistic but always holds. It never shrinks as n -> infinity.
    - but if n is small then the pCI might be too small and you're "lying to yourself". So if you have very little data,
    perhaps using std_n for the CI is better. That holds with prob 99.9%. Hopefully the std is not too large for your
    experiments to be invalidated.

ref:
    - https://stats.stackexchange.com/questions/554332/confidence-interval-given-the-population-mean-and-standard-deviation?noredirect=1&lq=1
    - https://stackoverflow.com/questions/70356922/what-is-the-proper-way-to-compute-95-confidence-intervals-with-pytorch-for-clas
    - https://www.youtube.com/watch?v=MzvRQFYUEFU&list=PLUl4u3cNGP60hI9ATjSFgLZpbNJ7myAg6&index=205
"""
answered 2022-02-16T19:53:27.913