I have a fundamental confusion regarding logp. I would like to explain it through an example from a webpage so that I don't fall short of explaining it well.
I wrote disaster_model.py as illustrated in this tutorial: http://pymc-devs.github.io/pymc/tutorial.html
I launched a Python shell and, after importing all the required modules, did the following:
In [2]: import disaster_model
In [3]: disaster_model.switchpoint.logp
Out[3]: -4.709530201312334
In [4]: disaster_model.late_mean.logp
Out[4]: -2.407183392124894
In [5]: disaster_model.early_mean.logp
Out[5]: -2.9780301980174
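Incidentally, switchpoint's logp matches what I would compute by hand for a flat prior. Here is that quick check (a sketch, assuming the tutorial's DiscreteUniform(lower=0, upper=110) prior on switchpoint):

import numpy as np
# A DiscreteUniform(0, 110) prior puts mass 1/111 on each admissible value,
# so its log-probability is -log(111) no matter what the current value is.
print(-np.log(111))   # ~ -4.709530..., the same number as Out[3]

After that I built the sampler and drew samples: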
M = MCMC(disaster_model)
M.sample(iter = 10000, burn = 1000, thin = 10)
In [11]: M.switchpoint.logp
Out[11]: -4.709530201312334
In [12]: M.early_mean.logp
Out[12]: -3.2263189370368117
In [13]: M.late_mean.logp
Out[13]: -0.9012784557735074
In [14]: M.disasters.logp
Out[14]: -164.37141285002255
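I could also check whether the current values of the underlying stochastics moved during sampling (a sketch; as far as I know, .value holds each node's current state in PyMC 2):

print(M.switchpoint.value)   # current (last accepted) switchpoint
print(M.early_mean.value)    # current early_mean
print(M.late_mean.value)     # current late_mean
print(M.disasters.value)     # the observed data, fixed by observed=True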
I will re-emphasize this line (written in disaster_model.py):
disasters = Poisson('disasters', mu=rate, value=disasters_array, observed=True)
Hence the value of disasters is never going to change.
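If it helps, the sanity check I have in mind is to recompute disasters' log-likelihood by hand at the current rate (a sketch, assuming scipy is available and that the tutorial's rate deterministic and disasters_array are reachable as module attributes):

import numpy as np
from scipy.stats import poisson
# The data are fixed, but the Poisson log-likelihood is evaluated at mu = rate,
# and rate is a deterministic function of switchpoint, early_mean and late_mean.
by_hand = np.sum(poisson.logpmf(disaster_model.disasters_array,
                                mu=disaster_model.rate.value))
print(by_hand, disaster_model.disasters.logp)   # should agree up to floating point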
Now my questions are:
1) Why did the log probabilities of every variable except switchpoint change?
(Kindly explain why the log probabilities should change at all, and if they should, why switchpoint's didn't.)
2) What do the old and new log probabilities represent?
(It was an IPython shell, not plain Python, but that hardly matters.)