Is there a way to use the numpy.percentile function to compute weighted percentiles? Or is anyone aware of an alternative Python function for computing weighted percentiles?
Thanks!
Here is the code I use. It's not an optimal one (which I wasn't able to write in pure numpy), but it's still much faster and more reliable than the accepted solution:
import numpy as np

def weighted_quantile(values, quantiles, sample_weight=None,
                      values_sorted=False, old_style=False):
    """ Very close to numpy.percentile, but supports weights.
    NOTE: quantiles should be in [0, 1]!
    :param values: numpy.array with data
    :param quantiles: array-like with many quantiles needed
    :param sample_weight: array-like of the same length as `values`
    :param values_sorted: bool, if True, then will avoid sorting of
        initial array
    :param old_style: if True, will correct output to be consistent
        with numpy.percentile.
    :return: numpy.array with computed quantiles.
    """
    values = np.array(values)
    quantiles = np.array(quantiles)
    if sample_weight is None:
        sample_weight = np.ones(len(values))
    sample_weight = np.array(sample_weight)
    assert np.all(quantiles >= 0) and np.all(quantiles <= 1), \
        'quantiles should be in [0, 1]'

    if not values_sorted:
        sorter = np.argsort(values)
        values = values[sorter]
        sample_weight = sample_weight[sorter]

    weighted_quantiles = np.cumsum(sample_weight) - 0.5 * sample_weight
    if old_style:
        # To be consistent with numpy.percentile
        weighted_quantiles -= weighted_quantiles[0]
        weighted_quantiles /= weighted_quantiles[-1]
    else:
        weighted_quantiles /= np.sum(sample_weight)
    return np.interp(quantiles, weighted_quantiles, values)
Examples:

weighted_quantile([1, 2, 9, 3.2, 4], [0.0, 0.5, 1.])
array([ 1. ,  3.2,  9. ])

weighted_quantile([1, 2, 9, 3.2, 4], [0.0, 0.5, 1.], sample_weight=[2, 1, 2, 4, 1])
array([ 1. ,  3.2,  9. ])
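As a quick sanity check (my addition, not part of the original answer): with equal weights, old_style=True reproduces numpy.percentile exactly:

x = [1, 2, 9, 3.2, 4]
print(weighted_quantile(x, [0.0, 0.5, 1.0], old_style=True))  # [1.  3.2 9. ]
print(np.percentile(x, [0, 50, 100]))                         # [1.  3.2 9. ]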
A quick solution that first sorts and then interpolates:
import numpy as np

def weighted_percentile(data, percents, weights=None):
    ''' percents in units of 1%
        weights specifies the frequency (count) of data.
    '''
    if weights is None:
        return np.percentile(data, percents)
    ind = np.argsort(data)
    d = data[ind]
    w = weights[ind]
    p = 1. * w.cumsum() / w.sum() * 100
    y = np.interp(percents, p, d)
    return y
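For example (my own illustration; data and weights need to be numpy arrays here, since the function indexes them):

d = np.array([1, 2, 9, 3.2, 4])
w = np.array([2, 1, 2, 4, 1])
weighted_percentile(d, 50, w)
# 2.6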
I don't know what "weighted percentile" means, but from @Joan Smith's answer it seems that you just need to repeat every element of ar, and you can use numpy.repeat() for that:
import numpy as np
np.repeat([1,2,3], [4,5,6])
The result is:
array([1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3])
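Passing the repeated array to numpy.percentile then gives a weighted percentile (my addition for illustration; this only works for non-negative integer weights):

np.percentile(np.repeat([1, 2, 3], [4, 5, 6]), 50)
# 2.0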
Apologies for the extra (unoriginal) answer (not enough rep to comment on @nayyarv's). His solution worked for me (i.e. it replicates the default behaviour of np.percentile), but I think you can eliminate the for loop with clues from how the original np.percentile is written.
import numpy as np

def weighted_percentile(a, q=np.array([75, 25]), w=None):
    """
    Calculates percentiles associated with a (possibly weighted) array

    Parameters
    ----------
    a : array-like
        The input array from which to calculate percents
    q : array-like
        The percentiles to calculate (0.0 - 100.0)
    w : array-like, optional
        The weights to assign to values of a. Equal weighting if None
        is specified

    Returns
    -------
    values : np.array
        The values associated with the specified percentiles.
    """
    # Standardize and sort based on values in a
    q = np.array(q) / 100.0
    if w is None:
        w = np.ones(a.size)
    idx = np.argsort(a)
    a_sort = a[idx]
    w_sort = w[idx]

    # Get the cumulative sum of weights
    ecdf = np.cumsum(w_sort)

    # Find the percentile index positions associated with the percentiles
    p = q * (w.sum() - 1)

    # Find the bounding indices (both low and high)
    idx_low = np.searchsorted(ecdf, p, side='right')
    idx_high = np.searchsorted(ecdf, p + 1, side='right')
    idx_high[idx_high > ecdf.size - 1] = ecdf.size - 1

    # Calculate the weights
    weights_high = p - np.floor(p)
    weights_low = 1.0 - weights_high

    # Extract the low/high indexes and multiply by the corresponding weights
    x1 = np.take(a_sort, idx_low) * weights_low
    x2 = np.take(a_sort, idx_high) * weights_high

    # Return the average
    return np.add(x1, x2)
# Sample data
a = np.array([1.0, 2.0, 9.0, 3.2, 4.0], dtype=float)
w = np.array([2.0, 1.0, 3.0, 4.0, 1.0], dtype=float)

# Make an unweighted "copy" of a for testing
a2 = np.repeat(a, w.astype(int))

# Tests with different percentiles chosen
q1 = np.linspace(0.0, 100.0, 11)
q2 = np.linspace(5.0, 95.0, 10)
q3 = np.linspace(4.0, 94.0, 10)
for q in (q1, q2, q3):
    assert np.all(weighted_percentile(a, q, w) == np.percentile(a2, q))
Cleaner and simpler, using this reference for the weighted percentile method:
import numpy as np

def weighted_percentile(data, weights, perc):
    """
    perc : percentile in [0-1]!
    """
    ix = np.argsort(data)
    data = data[ix]  # sort data
    weights = weights[ix]  # sort weights
    cdf = (np.cumsum(weights) - 0.5 * weights) / np.sum(weights)  # 'like' a CDF function
    return np.interp(perc, cdf, data)
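For example (my own illustration, reusing the sample data from the first answer):

data = np.array([1, 2, 9, 3.2, 4])
weights = np.array([2, 1, 2, 4, 1])
weighted_percentile(data, weights, 0.5)
# 3.2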
This now seems to be implemented in statsmodels:
from statsmodels.stats.weightstats import DescrStatsW
wq = DescrStatsW(data=np.array([1, 2, 9, 3.2, 4]), weights=np.array([0.0, 0.5, 1.0, 0.3, 0.5]))
wq.quantile(probs=np.array([0.1, 0.9]), return_pandas=False)
# array([2., 9.])
The DescrStatsW object also implements other methods, such as the weighted mean: https://www.statsmodels.org/stable/generated/statsmodels.stats.weightstats.DescrStatsW.html
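For instance (these are attributes documented on the statsmodels page above):

wq.mean  # weighted mean
wq.var   # weighted variance
wq.std   # weighted standard deviation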
I use this function for my needs:
import numpy

def quantile_at_values(values, population, weights=None):
    values = numpy.atleast_1d(values).astype(float)
    population = numpy.atleast_1d(population).astype(float)

    # if no weights are given, use equal weights
    if weights is None:
        weights = numpy.ones(population.shape).astype(float)
        normal = float(len(weights))
    # else, check weights
    else:
        weights = numpy.atleast_1d(weights).astype(float)
        assert len(weights) == len(population)
        assert (weights >= 0).all()
        normal = numpy.sum(weights)
        assert normal > 0.

    quantiles = numpy.array([numpy.sum(weights[population <= value])
                             for value in values]) / normal
    assert (quantiles >= 0).all() and (quantiles <= 1).all()

    return quantiles
If you want percentiles instead of quantiles, multiply the results by 100.
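Note that this function goes in the opposite direction of most answers here: it returns the quantile rank of given values within a weighted population. A quick illustration of my own:

quantile_at_values([2.5], [1, 2, 3, 4], weights=[1, 1, 1, 1])
# array([0.5])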
As mentioned in the comments, simply repeating values is impossible for float weights, and impractical for very large datasets. There is a library that does weighted percentiles here: http://kochanski.org/gpk/code/speechresearch/gmisclib/gmisclib.weighted_percentile-module.html It worked for me.
import numpy as np

def weighted_percentile(a, percentile=np.array([75, 25]), weights=None):
    """
    O(nlgn) implementation for weighted_percentile.
    """
    percentile = np.array(percentile) / 100.0
    if weights is None:
        weights = np.ones(len(a))

    a_indsort = np.argsort(a)
    a_sort = a[a_indsort]
    weights_sort = weights[a_indsort]
    ecdf = np.cumsum(weights_sort)

    percentile_index_positions = percentile * (weights.sum() - 1) + 1
    # need the 1 offset at the end due to ecdf not starting at 0
    locations = np.searchsorted(ecdf, percentile_index_positions)

    out_percentiles = np.zeros(len(percentile_index_positions))

    for i, empiricalLocation in enumerate(locations):
        # iterate across the requested percentiles
        if ecdf[empiricalLocation - 1] == np.floor(percentile_index_positions[i]):
            # i.e. is the percentile in between 2 separate values
            uppWeight = percentile_index_positions[i] - ecdf[empiricalLocation - 1]
            lowWeight = 1 - uppWeight

            out_percentiles[i] = a_sort[empiricalLocation - 1] * lowWeight + \
                                 a_sort[empiricalLocation] * uppWeight
        else:
            # i.e. the percentile is entirely in one bin
            out_percentiles[i] = a_sort[empiricalLocation]

    return out_percentiles
This is my function above; it gives identical behaviour to

np.percentile(np.repeat(a, weights), percentile)

with less memory overhead. np.percentile is an O(n) implementation, so it's potentially faster for small weights. It sorts out all the edge cases: it's an exact solution. The interpolation answers above assume linearity, when the weighted CDF is in fact a step function in most cases, except when every weight is 1.

Say we have data [1, 2, 3] with weights [3, 11, 7] and I want the 25th percentile. My ecdf will be [3, 14, 21] and I'm looking for the 5th value. Interpolation sees (3, 1) and (14, 2) as the bracketing points and gives about 1.18, even though the 5th value lies entirely inside the second bin, whose value is 2.
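A quick check of that worked example (my own code; the repeat trick applies here because the weights are integers):

import numpy as np

a = np.array([1, 2, 3])
weights = np.array([3, 11, 7])

# Exact result: the 25th percentile falls entirely inside the second bin
print(np.percentile(np.repeat(a, weights), 25))
# 2.0

# Naive linear interpolation over the ecdf gives ~1.18 instead
ecdf = np.cumsum(weights)  # [ 3, 14, 21]
print(np.interp(0.25 * (weights.sum() - 1), ecdf, a))
# 1.1818...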
The weightedcalcs package (pip install weightedcalcs) supports weighted quantiles:

import weightedcalcs as wc
import pandas as pd
df = pd.DataFrame({'v': [1, 2, 3], 'w': [3, 2, 1]})
calc = wc.Calculator('w') # w designates weight
calc.quantile(df, 'v', 0.5)
# 1.5
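The same Calculator object exposes other weighted statistics as well (per the weightedcalcs documentation):

calc.mean(df, 'v')    # weighted mean
calc.median(df, 'v')  # shorthand for the 0.5 quantile
calc.std(df, 'v')     # weighted standard deviation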
Unfortunately, numpy doesn't have built-in weighted functions for everything, but you can always put something together:
def weight_array(ar, weights):
    zipped = zip(ar, weights)
    weighted = []
    for a, w in zipped:
        # repeat each value according to its (integer) weight
        for j in range(w):
            weighted.append(a)
    return weighted

np.percentile(weight_array(ar, weights), 25)
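For what it's worth, newer NumPy can also do this directly. Assuming NumPy >= 2.0, np.percentile accepts a weights argument, though only with method='inverted_cdf':

import numpy as np

ar = np.array([1, 2, 3])
weights = np.array([4, 5, 6])

# Weighted percentile without materializing the repeated array.
# Note: inverted_cdf is a step-function estimator, so results can differ
# from the default linear interpolation applied to the repeated array.
np.percentile(ar, 25, weights=weights, method='inverted_cdf')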
Here is my solution:
import numpy as np

def my_weighted_perc(data, perc, weights=None):
    if weights is None:
        return np.nanpercentile(data, perc)
    else:
        # drop entries where either the data or the weight is NaN
        mask = (~np.isnan(data)) & (~np.isnan(weights))
        d = data[mask]
        wei = weights[mask]
        ix = np.argsort(d)
        d = d[ix]
        wei = wei[ix]
        wei_cum = 100. * np.cumsum(wei) / np.sum(wei)
        return np.interp(perc, wei_cum, d)
It simply computes the weighted CDF of the data and then uses it to estimate the weighted percentiles.
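For example (my own check, with the same data as in the other answers):

my_weighted_perc(np.array([1., 2., 9., 3.2, 4.]), 50,
                 weights=np.array([2., 1., 2., 4., 1.]))
# 2.6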