If I understand correctly what you're after, you can do this with the git version of PyMC (PyMC3) and its glm submodule. For example:
import numpy as np
import pymc as pm
import matplotlib.pyplot as plt
from pymc import glm

## Make some data
x = np.array(range(0, 50))
y = np.random.uniform(low=0.0, high=40.0, size=50)
y = 2 * x + y
## plt.scatter(x, y)

data = dict(x=x, y=y)

with pm.Model() as model:
    # Specify the glm and pass in the data. The resulting linear model, its
    # likelihood and all its parameters are automatically added to our model.
    pm.glm.glm('y ~ x', data)
    step = pm.NUTS()  # Instantiate the MCMC sampling algorithm
    trace = pm.sample(2000, step)

## fig = pm.traceplot(trace, lines={'alpha': 1, 'beta': 2, 'sigma': .5})  ## traces
fig = plt.figure()
ax = fig.add_subplot(111)
plt.scatter(x, y, label='data')
glm.plot_posterior_predictive(trace, samples=50, eval=x,
                              label='posterior predictive regression lines')
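For intuition, that single glm call is roughly what you would get by writing the model out by hand, along these lines (the priors below are illustrative assumptions of mine; the actual defaults used by the glm submodule may differ):

with pm.Model() as manual_model:
    # Priors on the regression parameters; the names match what glm creates,
    # but these particular distributions/widths are my own assumptions.
    intercept = pm.Normal('Intercept', mu=0, sd=10)
    slope = pm.Normal('x', mu=0, sd=10)
    sigma = pm.HalfCauchy('sigma', beta=10)
    # Normal likelihood centred on the regression line
    pm.Normal('y', mu=intercept + slope * x, sd=sigma, observed=y)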
and you should get something like this:
You should find these blog posts interesting: 1 and 2, which are where I got these ideas from.
EDIT
To get the y values at each x, try this, which I pieced together from digging through the glm source:
lm = lambda x, sample: sample['Intercept'] + sample['x'] * x  ## linear model
samples = 50  ## Choose to be the same as in the plot call
trace_det = np.empty([samples, len(x)])  ## initialise

for i, rand_loc in enumerate(np.random.randint(0, len(trace), samples)):
    rand_sample = trace[rand_loc]
    trace_det[i] = lm(x, rand_sample)

y_preds = trace_det.T  ## shape (len(x), samples); renamed so we don't clobber the data y
y_preds[0]  ## the 50 sampled y values at x[0]
Apologies if it's not the most elegant; hopefully you can follow the logic.
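If you want a summary curve rather than individual sampled lines, you can reduce trace_det across the sample axis; a minimal sketch (the 95% band here is my own choice, not something the glm source prescribes):

y_mean = trace_det.mean(axis=0)  ## posterior mean line at each x
y_lo, y_hi = np.percentile(trace_det, [2.5, 97.5], axis=0)  ## central 95% band

plt.plot(x, y_mean, label='posterior mean')
plt.fill_between(x, y_lo, y_hi, alpha=0.3, label='95% interval')
plt.legend()
plt.show()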