
I293 nuts better dual averaging #1112

Merged: 111 commits merged into master on Aug 25, 2020

Conversation

martinjrobins
Member

Fixes #293

This is an update of the previous NUTS PR, with better dual averaging to more closely match Stan.

It also removes the slice sampling option, as it was too difficult to keep in step with the multinomial sampling.
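
For context, the dual-averaging scheme referred to here is the step-size adaptation from Hoffman & Gelman (2014, Algorithm 5), which Stan also uses. The sketch below is illustrative only: the class, attribute, and parameter names are assumptions following the paper's defaults, not the actual PINTS implementation.

```python
import numpy as np

class DualAveraging:
    """Dual-averaging step-size adaptation (Hoffman & Gelman 2014, Alg. 5).

    Illustrative sketch only: names and defaults follow the paper (and
    Stan's defaults), not necessarily the PINTS NUTS code.
    """

    def __init__(self, initial_epsilon, target_accept=0.8,
                 gamma=0.05, t0=10.0, kappa=0.75):
        self.mu = np.log(10 * initial_epsilon)  # shrinkage point for log(eps)
        self.target_accept = target_accept
        self.gamma = gamma
        self.t0 = t0
        self.kappa = kappa
        self.h_bar = 0.0            # running average of (target - accept_prob)
        self.log_epsilon_bar = 0.0  # averaged log step size
        self.m = 0                  # adaptation iteration counter

    def update(self, accept_prob):
        """Return the step size to use for the next warm-up iteration."""
        self.m += 1
        eta = 1.0 / (self.m + self.t0)
        self.h_bar = (1 - eta) * self.h_bar \
            + eta * (self.target_accept - accept_prob)
        log_epsilon = self.mu - np.sqrt(self.m) / self.gamma * self.h_bar
        w = self.m ** (-self.kappa)
        self.log_epsilon_bar = w * log_epsilon + (1 - w) * self.log_epsilon_bar
        return np.exp(log_epsilon)

    def adapted_epsilon(self):
        """Step size to freeze and use once warm-up ends."""
        return np.exp(self.log_epsilon_bar)
```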

martinjrobins and others added 30 commits January 22, 2020 17:32
@MichaelClerx
Member

@martinjrobins I've done the simpler coverage issues, but there are still a few to tick off in the actual NUTS/dual-averaging code. (You can see them on the "Diff" tab; not sure why the other tabs in codecov don't work for this.)

@martinjrobins
Member Author

@MichaelClerx, all changes are done. Can you check whether this looks OK, and then we can merge? I've tested everything except for one branch of an if statement, which I don't think is worth testing explicitly.

@MichaelClerx
Member

Are you sure that whatever causes the inf we test for here doesn't cause problems somewhere else?

@martinjrobins
Member Author

You mean the NaN coming from the logistic model? It might do; do we have a PINTS policy on models returning NaNs? We need to test NUTS more thoroughly, and in conjunction with transformations, since that is what is done in Stan etc. So I'd like to merge this in (and the transform and interfaces branch) before doing more testing.

@MichaelClerx
Member

> You mean the NaN coming from the logistic model? It might do; do we have a PINTS policy on models returning NaNs? We need to test NUTS more thoroughly, and in conjunction with transformations, since that is what is done in Stan etc. So I'd like to merge this in (and the transform and interfaces branch) before doing more testing.

Is that where the NaN is coming from, from a NaN returned by the logistic model? We've got an open issue about a "policy" for this at #794.
But my concern is that this NaN could easily cause trouble elsewhere in the method, so I'm not sure hiding it with a pragma is a good idea.

I'll do a quick check and then we can merge; maybe we should leave an issue open to check the method's behaviour if a NaN gets returned?
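
For what it's worth, one common way to guard against this (a hypothetical sketch, not the code in this PR; `guarded_logpdf` and its arguments are made-up names) is to map any non-finite log-density value to -inf, so the sampler simply rejects that point instead of propagating a NaN:

```python
import numpy as np

def guarded_logpdf(log_pdf, x):
    """Evaluate a log-density, mapping NaN or +inf returns to -inf.

    Hypothetical helper (not the actual PINTS code): a -inf log-density
    makes the leapfrog step a certain rejection / divergence, rather than
    letting a NaN propagate through the tree-building recursion.
    """
    value = log_pdf(x)
    if not np.isfinite(value):
        return -np.inf
    return value
```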

@martinjrobins martinjrobins merged commit 6061a7b into master Aug 25, 2020
@martinjrobins martinjrobins deleted the i293-nuts-better-dual-averaging branch August 25, 2020 11:24
@MichaelClerx
Member

🥳

Successfully merging this pull request may close these issues:

Implement No-U-Turn Sampler (NUTS)