
Conversation

@devmotion
Member

This PR adds proper docstrings to the included distributions, improves the implementation by adding a sampler and fixing some type instability issues, and prepares the code for Distributions 0.24 (which fixes the logpdf/pdf weirdness for univariate distributions).

I noticed that we currently can't support Distributions 0.24 since it requires TuringLang/DynamicPPL.jl#150 (both AdvancedMH and DynamicPPL block the update, and the latest versions of AdvancedMH are only compatible with AbstractMCMC 2, so it has to be supported by DynamicPPL and Turing as well).

The random measures could probably be improved in a similar way.
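The caching and type-stability pattern the PR description refers to could be sketched roughly as follows. This is a hedged illustration, not the code merged in this PR: `MyLogPoisson` and its fields are hypothetical names, and the `Distributions`/`SpecialFunctions` methods are assumed from their public APIs.

```julia
using Distributions, Random
using SpecialFunctions: loggamma

# Hypothetical sketch of a LogPoisson-style distribution that caches
# exp(logλ) at construction so repeated logpdf/rand calls skip the exp.
struct MyLogPoisson{T<:Real,S<:Real} <: DiscreteUnivariateDistribution
    logλ::T
    λ::S  # cached exp(logλ)
end
MyLogPoisson(logλ::Real) = MyLogPoisson(logλ, exp(logλ))

# Poisson log-density in terms of the cached field:
# logpdf = k * logλ - λ - log(k!)
function Distributions.logpdf(d::MyLogPoisson, k::Int)
    return k * d.logλ - d.λ - loggamma(k + 1)
end

# Sampler: delegate to the standard Poisson using the cached rate.
function Distributions.rand(rng::Random.AbstractRNG, d::MyLogPoisson)
    return rand(rng, Poisson(d.λ))
end
```

Keeping `logλ` and `λ` as separate type parameters preserves type stability even when `exp` promotes the element type (e.g. for dual numbers in AD).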

@codecov

codecov bot commented Oct 7, 2020

Codecov Report

Merging #1431 into master will decrease coverage by 1.47%.
The diff coverage is 30.95%.


@@            Coverage Diff             @@
##           master    #1431      +/-   ##
==========================================
- Coverage   67.03%   65.56%   -1.48%     
==========================================
  Files          25       25              
  Lines        1617     1661      +44     
==========================================
+ Hits         1084     1089       +5     
- Misses        533      572      +39     
Impacted Files                 Coverage Δ
src/Turing.jl                  100.00% <ø> (ø)
src/stdlib/distributions.jl    30.76% <30.95%> (-18.17%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update e8e24f8...54edcef.

Comment on lines +217 to +219
struct LogPoisson{T<:Real,S} <: DiscreteUnivariateDistribution
logλ::T
λ::S
Member

Is it worth adding an extra field here to remove the exp call later?

Member Author

I don't have a strong opinion on this one. Caching will be (slightly) more efficient if you evaluate the logpdf or sample at least twice, and it won't be less efficient as long as you evaluate or sample at least once. I noticed that caching became possible after updating the BinomialLogit distribution (in that case the cached computations are definitely more expensive, though).
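The BinomialLogit case mentioned above can be sketched like this. Again a hypothetical illustration, not the merged code: `MyBinomialLogit` and the cached `n * log1pexp(logitp)` term are assumptions standing in for whatever the actual implementation caches, but they show why the cached work is more expensive here than a single `exp`.

```julia
using SpecialFunctions: loggamma
using LogExpFunctions: log1pexp

# Hypothetical sketch: for p = logistic(logitp) we have
#   log(p)     = logitp - log1pexp(logitp)
#   log(1 - p) = -log1pexp(logitp)
# so the logpdf simplifies to logC + k * logitp - n * log1pexp(logitp),
# and the relatively expensive n * log1pexp(logitp) term can be cached.
struct MyBinomialLogit{T<:Real}
    n::Int
    logitp::T
    logconst::T  # cached n * log1pexp(logitp)
end
MyBinomialLogit(n::Int, logitp::Real) =
    MyBinomialLogit(n, logitp, n * log1pexp(logitp))

function mylogpdf(d::MyBinomialLogit, k::Int)
    # log binomial coefficient via loggamma
    logC = loggamma(d.n + 1) - loggamma(k + 1) - loggamma(d.n - k + 1)
    return logC + k * d.logitp - d.logconst
end
```

Compared to LogPoisson's single cached `exp`, this caches a `log1pexp` evaluation scaled by `n`, so it pays off sooner.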

Member

Alright, then I'm happy. Let's stick with this for now.

@devmotion devmotion merged commit 6dca57e into TuringLang:master Oct 9, 2020