You can use lmfit
(see https://lmfit.github.io/lmfit-py/examples/example_expression_model.html).
First, let's define a lor
function (based on yours):
import numpy as np
def lor(x, R, F):
    return R / (R**2 + 4 * np.pi**2 * (x - F)**2)
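As a side note on this parameterization (my reading of the formula, not something stated in the question): the peak height at x = F is 1/R and the half-width at half maximum is R/(2π), so a larger R gives a lower, broader peak. A quick numpy check:

```python
import numpy as np

def lor(x, R, F):
    return R / (R**2 + 4 * np.pi**2 * (x - F)**2)

# At x = F the denominator reduces to R**2, so the peak height is 1/R.
# The value drops to half of that when 4*pi**2*(x - F)**2 == R**2,
# i.e. at a half-width of R / (2*pi).
R, F = 7.0, 20.0
hwhm = R / (2 * np.pi)
print(lor(F, R, F))         # peak height, equal to 1/R
print(lor(F + hwhm, R, F))  # half the peak height, 1/(2*R)
```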
and some sample data:
import matplotlib.pyplot as plt
x = np.linspace(0, 1000, 1000)
Rs = np.array([7, 8, 11, 11, 7.5, 10, 5, 6, 3])
Fs = np.array([20, 210, 230, 250, 270, 300, 790, 800, 810])
Ys = lor(x, Rs.reshape(-1, 1), Fs.reshape(-1, 1))
for _y in Ys:
    plt.plot(x, _y)
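The reshape(-1, 1) calls above are what let a single vectorized call produce all nine curves at once: the parameter arrays become column vectors, which broadcast against x into one row per curve. A minimal sketch of the same pattern:

```python
import numpy as np

# Column vectors of shape (n, 1) broadcast against x of shape (m,)
# to give an (n, m) array: each row is the function evaluated with
# one parameter value (here a simple product stands in for lor()).
Rs = np.array([7.0, 8.0, 11.0])
x = np.linspace(0.0, 10.0, 5)
curves = Rs.reshape(-1, 1) * x
print(curves.shape)  # (3, 5): one row per parameter
```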
Now let's sum all the lor
curves:
y = Ys.sum(axis=0)
plt.plot(x, y)
Suppose this is the curve you actually want to fit.
To fit it, we can first find the peaks:
from scipy.signal import find_peaks
peaks_idx, _ = find_peaks(y)
peaks_x = x[peaks_idx]
peaks_y = y[peaks_idx]
peaks_x
array([ 20.02002002, 210.21021021, 230.23023023, 250.25025025,
       270.27027027, 300.3003003 , 789.78978979, 799.7997998 ,
       809.80980981])
As you can see, those are the F
values of our different lor
functions:
plt.plot(x, y)
plt.plot(peaks_x, peaks_y, 'r.')
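On clean synthetic data a bare find_peaks(y) works, but on noisy measurements it flags every local wiggle as a peak. find_peaks accepts height and prominence thresholds to filter those out; a small sketch (the threshold value here is illustrative, not from the original answer):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
x = np.linspace(0, 1000, 1000)
# one real peak at x = 500 plus small measurement noise
y = np.exp(-(x - 500.0)**2 / 200.0) + 0.01 * rng.normal(size=x.size)

# Without thresholds every noise bump is reported; requiring a minimum
# prominence keeps only the genuine peak.
idx, _ = find_peaks(y, prominence=0.5)
print(x[idx])  # roughly [500.]
```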
Finally, we import ExpressionModel
from lmfit and build the model string from the lor
function and the peaks we found:
from lmfit.models import ExpressionModel
_mod = []
for i, p in enumerate(peaks_x):
    _mod.append(f'R{i} / (R{i}**2 + 4 * pi**2 * (x-{p})**2)')
model = ' + '.join(_mod)
model
'R0 / (R0**2 + 4 * pi**2 * (x-20.02002002002002)**2) + R1 / (R1**2 + 4 * pi**2 * (x-210.21021021021022)**2) + R2 / (R2**2 + 4 * pi**2 * (x-230.23023023023026)**2) + R3 / (R3**2 + 4 * pi**2 * (x-250.25025025025028)**2) + R4 / (R4**2 + 4 * pi**2 * (x-270.2702702702703)**2) + R5 / (R5**2 + 4 * pi**2 * (x-300.3003003003003)**2) + R6 / (R6**2 + 4 * pi**2 * (x-789.7897897897899)**2) + R7 / (R7**2 + 4 * pi**2 * (x-799.7997997997999)**2) + R8 / (R8**2 + 4 * pi**2 * (x-809.8098098098098)**2)'
We can now initialize the model and its parameters, which are only the R
s (since we already have the F
s, the peak positions):
mod = ExpressionModel(model)
params = mod.make_params()
# set the initial value of every parameter to 1
for name in params:
    mod.set_param_hint(name, value=1)
params = mod.make_params()
and fit the curve (where y
is the curve you want to fit, i.e., the one you measured):
result = mod.fit(y, params, x=x)
and inspect the fit report:
print(result.fit_report())
[[Model]]
    Model(_eval)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 81
    # data points      = 1000
    # variables        = 9
    chi-square         = 0.00426669
    reduced chi-square = 4.3054e-06
    Akaike info crit   = -12346.6724
    Bayesian info crit = -12302.5026
[[Variables]]
    R0:  7.00213482 +/- 0.09828903 (1.40%) (init = 1)
    R1:  8.19392889 +/- 0.13088578 (1.60%) (init = 1)
    R2:  11.1504879 +/- 0.21714760 (1.95%) (init = 1)
    R3:  11.1762542 +/- 0.21792883 (1.95%) (init = 1)
    R4:  7.84104148 +/- 0.12104425 (1.54%) (init = 1)
    R5:  10.2784143 +/- 0.19103875 (1.86%) (init = 1)
    R6:  5.34471739 +/- 0.05795283 (1.08%) (init = 1)
    R7:  6.25447147 +/- 0.07912175 (1.27%) (init = 1)
    R8:  3.46872840 +/- 0.02446415 (0.71%) (init = 1)
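As a sanity check (mine, not part of the original answer), we can rebuild the summed curve from the fitted R values in the report and compare it to the true curve. Note the fit actually used the detected peak positions (e.g. x = 809.81 rather than 810), so evaluating at the true Fs leaves a visible but modest residual:

```python
import numpy as np

def lor(x, R, F):
    return R / (R**2 + 4 * np.pi**2 * (x - F)**2)

x = np.linspace(0, 1000, 1000)
Rs_true = np.array([7, 8, 11, 11, 7.5, 10, 5, 6, 3])
Fs = np.array([20, 210, 230, 250, 270, 300, 790, 800, 810])
# R values copied (rounded) from the fit report above
Rs_fit = np.array([7.002, 8.194, 11.150, 11.176, 7.841,
                   10.278, 5.345, 6.254, 3.469])

y_true = lor(x, Rs_true.reshape(-1, 1), Fs.reshape(-1, 1)).sum(axis=0)
y_fit = lor(x, Rs_fit.reshape(-1, 1), Fs.reshape(-1, 1)).sum(axis=0)
# modest residual; the fit itself (at the detected positions) is tighter
print(np.max(np.abs(y_true - y_fit)))
```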
and check the plot:
fig, ax = plt.subplots(figsize=(10, 5))
plt.plot(x, y, 'b', lw=1, label='observed')
plt.plot(x, result.best_fit, 'r--', label='best fit')
plt.legend(loc='best')
plt.show()