Online EM doesn't always retrieve sufficient statistics correctly #461

Closed
mhoward2718 opened this issue May 20, 2015 · 3 comments

@mhoward2718
Contributor

Depending on how the model and transition are defined, Online EM won't always correctly retrieve sufficient statistics when using algorithms like BP or MH.

@bruttenberg
Collaborator

Can you either add more detail on this bug or post some code to reproduce it? There’s no way for anyone else to look into this given the description.


@bruttenberg bruttenberg added this to the Figaro 3.2 milestone May 21, 2015
@bruttenberg
Collaborator

Michael, can you please either add the code to reproduce this or provide a description of the bug?

@mhoward2718
Contributor Author

Variable Elimination uses a special factor type to compute sufficient statistics, while the other algorithms compute statistics from algorithm.distribution(). In batch these come out the same. In online situations where every parameterized variable in the model has evidence applied to it, they also come out the same. But when the parameterized variables do not have evidence in every time step, the statistics computed by BP, Importance, and MH don't match what's produced by VE. So the algorithms are at least inconsistent with each other, and I think what VE computes is correct.
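To make the quantity under discussion concrete, here is a minimal, Figaro-free Scala sketch of expected sufficient statistics for a single Beta-parameterized Flip across time steps that may or may not carry evidence. It is an illustration, not Figaro's implementation: it assumes an observed flip contributes an indicator count and an unobserved flip contributes its probability of being true under the current parameter value, and the names BetaParam, expectedTrueCount, and update are hypothetical.

```scala
// Minimal sketch (not Figaro's implementation) of expected sufficient statistics
// for a Beta-parameterized coin when some time steps have no evidence.
object OnlineEMStatsSketch {

  // Beta(alpha, beta) pseudo-counts for the coin's bias.
  final case class BetaParam(alpha: Double, beta: Double) {
    def mean: Double = alpha / (alpha + beta)
  }

  // Evidence for one flip in a time step: Some(value) if observed, None if not.
  // Assumption of this sketch: an observed flip contributes an indicator count,
  // an unobserved flip contributes P(true) under the current parameter value.
  def expectedTrueCount(evidence: Option[Boolean], pTrue: Double): Double =
    evidence match {
      case Some(true)  => 1.0
      case Some(false) => 0.0
      case None        => pTrue
    }

  // One online update: fold this step's expected counts into the pseudo-counts.
  def update(current: BetaParam, evidence: Seq[Option[Boolean]]): BetaParam = {
    val p = current.mean
    val expTrue  = evidence.map(expectedTrueCount(_, p)).sum
    val expFalse = evidence.size - expTrue
    BetaParam(current.alpha + expTrue, current.beta + expFalse)
  }

  def main(args: Array[String]): Unit = {
    // Three time steps; the middle step applies no evidence to the parameterized flip.
    val steps: Seq[Seq[Option[Boolean]]] =
      Seq(Seq(Some(true)), Seq(None), Seq(Some(false)))
    val learned = steps.foldLeft(BetaParam(1.0, 1.0))(update)
    println(s"learned pseudo-counts: $learned, mean = ${learned.mean}")
  }
}
```

Under a convention like this, the statistics accumulated by VE's sufficient-statistics factors and by the distribution-based computation in BP, Importance, and MH would be expected to agree whether or not every time step carries evidence; the report above is that they currently do not.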

By the way, #462 doesn't completely fix the problem and so this bug should stay open.
