For curve-fitting purposes, I am implementing a maximum likelihood estimator for discrete count data, using the result of curve_fit as the starting point for minimize. I have defined and tried this for several distributions, but for simplicity I am only including one here, a log-series distribution.
At this point I have also tried the following statsmodels approaches (a minimal sketch of the last of these follows the list):
- statsmodels.discrete.discrete_model.fit
- statsmodels.discrete.count_model.fit
- statsmodels.base.model.GenericLikelihoodModel
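For reference, here is a minimal sketch of how a log-series likelihood might be wired into GenericLikelihoodModel. The class name LogSeriesMLE, the expansion of the frequency table with np.repeat, and the assumption that an endog-only model works once start_params is supplied explicitly are mine, not part of the original attempt; it also relies on the x and y arrays defined in the script further down.

import numpy as np
from statsmodels.base.model import GenericLikelihoodModel

class LogSeriesMLE(GenericLikelihoodModel):  # hypothetical name, for illustration only
    def loglike(self, params):
        p = params[0]
        k = self.endog  # raw integer observations k = 1, 2, ...
        # log of the log-series PMF: log(-p**k / (k*log(1-p)))
        return np.sum(k * np.log(p) - np.log(k) - np.log(-np.log(1.0 - p)))

# x holds the observed values and y their frequencies (see the script below),
# so the frequency table is expanded into raw draws first.
samples = np.repeat(x, y)
res = LogSeriesMLE(samples).fit(start_params=[0.8], method='nm')
print(res.params)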
Most of the curve fits tend to run into overflow errors, or NaNs and zeros internally. I will detail those errors in a separate post.
#Import a few packages
import numpy as np
from scipy.optimize import curve_fit
from scipy.optimize import minimize
from scipy import stats
from numpy import log
import matplotlib.pyplot as plt
#Given data
x=np.arange(1, 28, 1)
y=np.array([18899, 10427, 6280, 4281, 2736, 1835, 1158, 746, 467, 328, 201, 129, 65, 69, 39, 21, 15, 10, 3, 3, 1, 1, 1, 1, 1, 1, 1])
#Define a custom distribution (the log-series PMF)
def Logser(x, p):
    # PMF of the log-series distribution: -p**x / (x * ln(1 - p)), for x = 1, 2, ... and 0 < p < 1
    return (-p**x)/(x*log(1-p))
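# Optional cross-check: scipy ships this distribution as scipy.stats.logser, whose
# pmf(k, p) = -p**k / (k*log(1-p)), so the custom Logser should agree with it,
# e.g. at an arbitrary p = 0.8.
assert np.allclose(Logser(x, 0.8), stats.logser.pmf(x, 0.8))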
#Doing a least squares curve fit to get an initial parameter guess
def lsqfit(x, y):
    # Trust-region reflective least squares, with p bounded to (0.5, 1)
    cf_result = curve_fit(Logser, x, y, p0=0.7, bounds=(0.5, 1), method='trf')
    return cf_result
param_guess=lsqfit(x,y)[0][0]
print(param_guess)
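# Optional visual sanity check of the least-squares fit: since Logser is a PMF that
# sums to 1 while y holds raw counts, the fitted curve is rescaled by y.sum() here.
plt.bar(x, y, alpha=0.5, label='observed counts')
plt.plot(x, y.sum() * Logser(x, param_guess), 'r-', label='scaled log-series fit')
plt.yscale('log')
plt.legend()
plt.show()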
#Doing a custom MLE definition, minimized using the scipy minimize function
def MLERegression(param_guess):
    yhat = Logser(x, param_guess)  # predictions based on a parameter value
    sd = 1  # initial guess: a normal error with unit standard deviation around the regressed curve
    # Next, flip the Bayesian question: compute the log of the probability density of the
    # observed values, assumed normally distributed around the mean (yhat) with standard
    # deviation sd, and return the negative log-likelihood.
    negLL = -np.sum(stats.norm.logpdf(y, loc=yhat, scale=sd))
    return negLL
results = minimize(MLERegression, param_guess, method='L-BFGS-B', bounds=[(0.5, 1.0)], options={'disp': True})
final_param=results['x']
print(final_param)
I have constrained the optimizer so that it gives me a result similar to what I expect (a parameter value around 0.8 or 0.9); otherwise the algorithm outputs zero.
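For comparison, here is a minimal sketch of the log-series likelihood written directly in terms of the frequency table, i.e. weighting each value's log-PMF by its observed count rather than assuming a normal error around the regressed curve. The function name logser_negLL is my own, and it reuses the bounds and starting value from the script above.

def logser_negLL(p):
    # Negative log-likelihood: each value x[i] contributes y[i] copies of log pmf(x[i]; p)
    return -np.sum(y * np.log(Logser(x, p)))

alt_result = minimize(logser_negLL, param_guess, method='L-BFGS-B', bounds=[(0.5, 1.0)])
print(alt_result.x)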