2022-06-28-bai22b.md

File metadata and controls

52 lines (52 loc) · 2.06 KB
---
title: Near-Optimal Learning of Extensive-Form Games with Imperfect Information
booktitle: Proceedings of the 39th International Conference on Machine Learning
abstract: This paper resolves the open question of designing near-optimal algorithms for learning imperfect-information extensive-form games from bandit feedback. We present the first line of algorithms that require only $\widetilde{\mathcal{O}}((XA+YB)/\varepsilon^2)$ episodes of play to find an $\varepsilon$-approximate Nash equilibrium in two-player zero-sum games, where $X,Y$ are the number of information sets and $A,B$ are the number of actions for the two players. This improves upon the best known sample complexity of $\widetilde{\mathcal{O}}((X^2A+Y^2B)/\varepsilon^2)$ by a factor of $\widetilde{\mathcal{O}}(\max\{X, Y\})$, and matches the information-theoretic lower bound up to logarithmic factors. We achieve this sample complexity by two new algorithms: Balanced Online Mirror Descent, and Balanced Counterfactual Regret Minimization. Both algorithms rely on novel approaches of integrating <em>balanced exploration policies</em> into their classical counterparts. We also extend our results to learning Coarse Correlated Equilibria in multi-player general-sum games.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: bai22b
month: 0
tex_title: Near-Optimal Learning of Extensive-Form Games with Imperfect Information
firstpage: 1337
lastpage: 1382
page: 1337-1382
order: 1337
cycles: false
bibtex_author: Bai, Yu and Jin, Chi and Mei, Song and Yu, Tiancheng
author:
- given: Yu
  family: Bai
- given: Chi
  family: Jin
- given: Song
  family: Mei
- given: Tiancheng
  family: Yu
date: 2022-06-28
address:
container-title: Proceedings of the 39th International Conference on Machine Learning
volume: '162'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 6
  - 28
pdf:
extras: []
---