This is for pymc-devs/pymc#3242 and pymc-devs/pymc#6992.
cc: @ricardoV94 @zaxtax
The first hackathon for this will be on Friday 31st May, but we plan to continue development beyond that day.
The idea is to approximate the marginal posterior distribution of some subset of the parameters with a Gaussian, which is referred to as the marginal Laplace approximation, and then integrate out the remaining parameters using another method (e.g. quadrature or MCMC).
This is great for latent Gaussian models.
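Concretely, for a latent Gaussian field $x$ and hyperparameters $\theta$, the standard INLA identity (the textbook formulation, written in my notation) is

$$
p(\theta \mid y) \;\propto\; \left. \frac{p(y \mid x, \theta)\, p(x \mid \theta)\, p(\theta)}{\tilde{p}_G(x \mid \theta, y)} \right|_{x = x^*(\theta)},
$$

where $\tilde{p}_G$ is the Gaussian (Laplace) approximation to $p(x \mid \theta, y)$ and $x^*(\theta)$ is its mode. The low-dimensional $\theta$ is then handled by quadrature or sampling.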
Reading list for those who are interested
- jax implementation (~500 lines of code, plus some sparse utilities)
- pymc3 effort
- Stan paper
- Dan Simpson's blog
- my rough notes
1. Laplace approximation (and misc)
- Implement Laplace approximation #341, Implement Laplace (quadratic) approximation #345
- Update Newton solver to find the mode of the Gaussian (for INLA) #342
- PyMC `MvNormal` with efficient calculation for logp (no inverse): Implement specialized MvNormal density based on precision matrix pymc#7345. `_precision_mv_normal_logp` should be obtained by a call to a `PrecisionMVNormal` class instance: refactor: move `_precision_mv_normal_logp` into a separate function pymc#7895. (A rough sketch of the precision-based logp follows this list.)
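For reference, here is a minimal NumPy/SciPy sketch of a precision-parametrised MvNormal logp that avoids any explicit matrix inverse by Cholesky-factorising the precision. This illustrates the idea only; it is not the pymc#7345 implementation:

```python
import numpy as np
from scipy.linalg import cholesky

def precision_mv_normal_logp(x, mu, Q):
    """Log-density of a multivariate normal parametrised by its
    precision matrix Q, without ever forming Q^{-1}."""
    delta = x - mu
    L = cholesky(Q, lower=True)                    # Q = L @ L.T
    logdet_Q = 2.0 * np.sum(np.log(np.diag(L)))    # log|Q| from the factor
    quad = np.sum((L.T @ delta) ** 2)              # delta.T @ Q @ delta = ||L.T delta||^2
    k = Q.shape[-1]
    return 0.5 * (logdet_Q - quad - k * np.log(2.0 * np.pi))
```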
2. Marginal Laplace approximation
- Implement Marginal Laplace approximation #344 using the step sampler interface or `pmx.MarginalModel()`. Make basic INLA interface and simple marginalisation routine #533 (a rough sketch of the marginal logp follows this list)
- Use Adjoint method to find the gradient of the Laplace approximation/mode #343, Implement a minimizer for INLA #513. Implement symbolic `minimize` and `root` `Op`s pytensor#1182, Fixed point iterator pytensor#978
- Optimise `logp` runtime: Calls to `pytensor.optimize.minimize` slow down `laplace_marginal_rv_logp` #568
- `Minimize` `Op` fails with varying parameter shapes: Gradient of MinimizeOp fails with certain parameter shapes pytensor#1550, Use vectorized jacobian in Minimize Op pytensor#1582
- Un-marginalise the latent field: Implement inference for the latent field in INLA #569
- Improvement: obtain the dimension of the latents more robustly: `laplace_marginal_rv_logp` should obtain latent dimension more robustly #570
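As a concrete reference for what the marginal logp computes, here is a rough NumPy sketch of the classic formulation: find the mode of the joint logp over the latents with Newton's method, then apply the Laplace correction. The callables `logp_joint`, `grad`, and `hess` are hypothetical placeholders; the real implementation goes through `pytensor.optimize.minimize` and its gradients (#343, pytensor#1182):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def marginal_laplace_logp(logp_joint, grad, hess, x0, n_steps=100, tol=1e-8):
    """Laplace approximation of log p(y | theta) = log int p(y, x | theta) dx.

    logp_joint(x), grad(x), hess(x) evaluate the joint log-density and its
    derivatives w.r.t. the latent vector x (hypothetical placeholders).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        H = -hess(x)                          # negative Hessian; SPD near the mode
        step = cho_solve(cho_factor(H), grad(x))
        x = x + step                          # Newton update: x <- x + H^{-1} grad
        if np.max(np.abs(step)) < tol:
            break
    # Laplace correction: logp_joint(x*) + k/2 log(2 pi) - 1/2 log|H(x*)|
    c, low = cho_factor(-hess(x))
    logdet_H = 2.0 * np.sum(np.log(np.diag(c)))
    k = x.size
    return logp_joint(x) + 0.5 * k * np.log(2.0 * np.pi) - 0.5 * logdet_H
```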
3. API
- Glue all the above together and get it working in `pm.Model`: Make basic INLA interface and simple marginalisation routine #533 (a hypothetical interface sketch follows this list)
- Create an interface with bambi that can be accessed like R-INLA
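As a concrete target for the glue work, here is a small latent Gaussian model of the kind this API should serve. The model block runs with today's PyMC; the commented-out `fit` call is hypothetical, illustrating the sort of interface #533 proposes rather than an existing function:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 10
data = rng.normal(size=n)

# Tridiagonal random-walk precision for the latent field, plus jitter
# so it is full rank (values are illustrative)
Q = (
    np.diag(np.full(n, 2.0))
    + np.diag(np.full(n - 1, -1.0), 1)
    + np.diag(np.full(n - 1, -1.0), -1)
)
Q[0, 0] = Q[-1, -1] = 1.0
Q += 0.01 * np.eye(n)

with pm.Model() as model:
    sigma = pm.HalfNormal("sigma", 1.0)            # hyperparameter, kept
    x = pm.MvNormal("x", mu=np.zeros(n), tau=Q)    # latent Gaussian field, to be marginalised
    y = pm.Normal("y", mu=x, sigma=sigma, observed=data)

    # Hypothetical interface, not an existing call:
    # idata = pmx.fit(method="INLA")  # Laplace-marginalise x, infer sigma
```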
4. Sparse matrix operations
INLA can work without them, but sparse operations are what will make it very quick and scalable, and get it nearer to R-INLA performance.
These would live in https://github.com/pymc-devs/pytensor/tree/main/pytensor/sparse. There is a jax implementation of all the parts we need.
- Implement sparse matrices in pytensor with the ability to (see the sketch after this list):
- add to the diagonal
- multiply by a vector
- solve a linear system
- compute a log determinant
- Implement a MVN distribution with a sparse precision matrix
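For orientation, the four operations above plus the sparse-precision MVN logp can be sketched with `scipy.sparse` (a stand-in here; the actual work targets `pytensor/sparse`). The toy precision matrix is made up for illustration:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy sparse precision: tridiagonal second-difference matrix plus jitter
n = 5
Q = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# 1. add to the diagonal
Q = Q + 0.1 * sp.eye(n, format="csc")

# 2. multiply by a vector
v = np.ones(n)
Qv = Q @ v

# 3. solve a linear system Q x = b via a sparse LU factorisation
lu = splu(Q)
b = np.arange(n, dtype=float)
x = lu.solve(b)

# 4. log determinant from the LU factors (L has unit diagonal, and
#    Q is SPD here, so |det Q| = prod |U_ii| and det Q > 0)
logdet_Q = np.sum(np.log(np.abs(lu.U.diagonal())))

# MVN logp with a sparse precision matrix (zero mean), reusing the pieces above
delta = x
logp = 0.5 * (logdet_Q - delta @ (Q @ delta) - n * np.log(2.0 * np.pi))
```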
5. Documentation and examples
- Generalised linear mixed model example: Make basic INLA interface and simple marginalisation routine #533
- Spatial stats example (maybe @elizavetasemenova)
- pymc ICAR example but rewritten using INLA
- Time series example setting up an AR model with ICAR (see footnote 41 of Dan Simpson's blog post)
- And more... Make basic INLA interface and simple marginalisation routine #533
Note: I will update and link to the issues/PRs once they are made. If you want to tackle one of these issues, comment below and I will update the list with your name.
If you have any more things to add, please comment and I will add them to the list and create issues.