2022-06-28-blau22a.md

---
title: Optimizing Sequential Experimental Design with Deep Reinforcement Learning
booktitle: Proceedings of the 39th International Conference on Machine Learning
abstract: Bayesian approaches developed to solve the optimal design of sequential
  experiments are mathematically elegant but computationally challenging. Recently,
  techniques using amortization have been proposed to make these Bayesian approaches
  practical, by training a parameterized policy that proposes designs efficiently
  at deployment time. However, these methods may not sufficiently explore the design
  space, require access to a differentiable probabilistic model and can only optimize
  over continuous design spaces. Here, we address these limitations by showing that
  the problem of optimizing policies can be reduced to solving a Markov decision
  process (MDP). We solve the equivalent MDP with modern deep reinforcement learning
  techniques. Our experiments show that our approach is also computationally efficient
  at deployment time and exhibits state-of-the-art performance on both continuous
  and discrete design spaces, even when the probabilistic model is a black box.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: blau22a
month: 0
tex_title: Optimizing Sequential Experimental Design with Deep Reinforcement Learning
firstpage: 2107
lastpage: 2128
page: 2107-2128
order: 2107
cycles: false
bibtex_author: Blau, Tom and Bonilla, Edwin V. and Chades, Iadine and Dezfouli, Amir
author:
- given: Tom
  family: Blau
- given: Edwin V.
  family: Bonilla
- given: Iadine
  family: Chades
- given: Amir
  family: Dezfouli
date: 2022-06-28
address:
container-title: Proceedings of the 39th International Conference on Machine Learning
volume: '162'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 6
  - 28
pdf:
extras:
---