FAVI vs NPE vs NSBI... #4

Open
jacksonloper opened this issue Jul 12, 2024 · 1 comment

@jacksonloper

Two things.

First thing: a suggestion to mention in the intro some of the myriad other names that refer to nearly identical ideas to neural SBI, such as NPE (neural posterior estimation), FAVI (forward amortized variational inference), and, perhaps oldest of all, sleep training (from the wake-sleep algorithm).

Second thing: a plea for help. Is there any hope that the field will converge on one name for this? I'm personally quite fond of neural simulation-based inference, but on Google Scholar NPE currently turns up about 452 papers whereas NSBI gets only 111. Do you have any clarity on this matter?

Many thanks!

Jackson

@marvinschmitt

marvinschmitt commented Oct 11, 2024

@jacksonloper I feel you and second that plea for a unified taxonomy.

To make things worse, "neural posterior estimation" is often used to mean specifically "neural posterior estimation with normalizing flows", because this generative model class happened to be one of the first to gain widespread traction.

I personally try to disentangle as follows:

  • simulation-based inference: inverse-problem solvers that are trained without access to an explicit density, i.e., only on simulations from the forward model. Examples include ABC-SMC and plain rejection sampling (see the rejection-ABC sketch after this list), but also neural methods. I guess we can be glad that "simulation-based inference" seems to be slowly replacing "likelihood-free inference"; the latter is not technically correct, because the likelihood does exist, just implicitly.
  • neural simulation-based inference: SBI with neural networks, targeting, for example, the posterior, the likelihood, or the likelihood-to-evidence ratio. Can have any generative model backbone, for example normalizing flows, flow matching, consistency models, or score-based diffusion. Can be sequential (multiple rounds that narrow the training simulations down to the observed data set of interest) or amortized (no narrowing, so the trained network stays general).
  • amortized inference: methods with a two-stage approach consisting of (1) training on the joint model and (2) inference on any data set that is compatible with the joint model (see the NPE sketch after this list). Amortized inference methods are usually neural-network based because neural networks offer a neat way to re-cast probabilistic model fitting as a prediction task. Amortized inference can be simulation-based (e.g., forward KL with normalizing flows) or likelihood-based (e.g., adding explicit likelihood information as a self-consistency loss while remaining amortized during inference, Link).
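
To make the first bullet concrete, here is a minimal rejection-ABC sketch on a toy Gaussian-mean model. The simulator, prior, summary statistic, and tolerance `eps` are illustrative assumptions, not anything prescribed in this thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=20):
    # Implicit likelihood: we can simulate data given theta,
    # but we never evaluate its density.
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    return x.mean()

x_obs = simulator(1.5)            # stand-in for the observed data set
s_obs = summary(x_obs)
eps = 0.05                        # acceptance tolerance (assumption)

accepted = []
for _ in range(100_000):
    theta = rng.normal(0.0, 3.0)  # draw a candidate from the prior
    if abs(summary(simulator(theta)) - s_obs) < eps:
        accepted.append(theta)    # keep thetas whose simulations match the data

posterior_samples = np.array(accepted)
print(f"{posterior_samples.size} accepted draws, "
      f"posterior mean = {posterior_samples.mean():.2f}")
```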

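And a matching sketch of the second and third bullets: an amortized neural posterior estimator trained with the forward-KL (maximum-likelihood) objective on (theta, x) pairs from the joint model. To keep it short, a diagonal-Gaussian head stands in for the usual normalizing-flow backbone; the toy model and all names are assumptions for illustration only:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_joint(batch):
    # Joint model: theta ~ N(0, 3^2), then x | theta ~ N(theta, 1)
    # with 20 i.i.d. observations per data set.
    theta = torch.randn(batch, 1) * 3.0
    x = theta + torch.randn(batch, 20)
    return theta, x

# Conditional density estimator q_phi(theta | x): a small MLP mapping the
# 20 observations to the mean and log-std of a Gaussian over theta.
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stage 1: simulation-based training. Minimizing -log q_phi(theta | x)
# over draws from the joint is the forward-KL / NPE objective.
for step in range(2000):
    theta, x = sample_joint(256)
    mu, log_sigma = net(x).chunk(2, dim=-1)
    nll = log_sigma + 0.5 * ((theta - mu) / log_sigma.exp()) ** 2
    loss = nll.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: amortized inference. One forward pass yields an approximate
# posterior for any new data set compatible with the joint model.
theta_true, x_new = sample_joint(1)
mu, log_sigma = net(x_new).chunk(2, dim=-1)
print(f"true theta: {theta_true.item():+.2f}  "
      f"q(theta|x) ~ N({mu.item():+.2f}, {log_sigma.exp().item():.2f}^2)")
```

Note how the two stages map onto the taxonomy: stage 1 is the simulation-based part (no explicit likelihood evaluations), and stage 2 is the amortized part (inference on a new data set costs a single forward pass, with no re-training).
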
I would love to hear your thoughts on this, and also your ideas on how we can expedite the field's convergence toward a sensible taxonomy.

Cheers,
Marvin
