# Available games

Statuses:

-   🟢: thoroughly-tested. In many cases, we verified against known values and/or reproduced results from papers.
-   🔶: implemented but lightly tested.
-   ❌: known issues (see notes below and code for details).
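Every game in the table below is exposed through OpenSpiel's game registry and can be loaded by its short name. As a minimal sketch (not part of this list itself; it assumes the `pyspiel` Python bindings are installed and uses `tic_tac_toe` as an example short name):

```python
import pyspiel

# Load a game from the registry by its short name (e.g. "tic_tac_toe").
game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()

# Play a (trivial) game by always taking the first legal action.
while not state.is_terminal():
    state.apply_action(state.legal_actions()[0])

print(state.returns())  # final returns for each player
```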
| Status | Game | Players | Deterministic | Perfect info | Description |
| ------ | ---- | ------- | ------------- | ------------ | ----------- |
| 🔶 | 2048 | 1 | | | A single-player game where the player aims to create a 2048 tile by merging other tiles. |
| 🔶 | Amazons | 2 | | | Move pieces on a board trying to block the opponent from moving. |
| 🔶 | Atari | 1 | ❌ (most games) | | Agent plays classic games from Gym's Atari Environments, such as Breakout. |
| 🟢 | Backgammon | 2 | | | Players move their pieces through the board based on the rolls of dice. |
| 🔶 | Bargaining | 2 | | | Agents negotiate for items in a pool with different (hidden) valuations. References: DeVault et al. '15. Lewis et al. '17. |
| 🔶 | Battleship | 2 | | | Players place ships and shoot at each other in turns. References: Farina et al. '19, Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks. |
| 🔶 | Blackjack | 1 | | | Simplified version of blackjack, with only HIT/STAND moves. |
| 🔶 | Block Dominoes | 2 | | | The simplest version of dominoes. Consists of 28 tiles, featuring all combinations of spot counts (also called pips or dots) between zero and six. |
| 🟢 | Breakthrough | 2 | | | Simplified chess using only pawns. |
| 🟢 | Bridge | 4 | | | A card game where players compete in pairs. |
| 🟢 | (Uncontested) Bridge bidding | 2 | | | Players score points by forming specific sets with the cards in their hands. |
| 🔶 | Catch | 1 | | | Agent must move horizontally to 'catch' a descending ball. Designed to test basic learning. References: Mnih et al. '14, Recurrent Models of Visual Attention. Osband et al. '19, Behaviour Suite for Reinforcement Learning, Appendix A. |
| 🔶 | Checkers | 2 | | | Players move pieces around the board with the goal of eliminating the opposing pieces. |
| 🔶 | Cliff Walking | 1 | | | Agent must find the goal without falling off a cliff. Designed to demonstrate exploration-with-danger. References: Sutton et al. '18, page 132. |
| 🔶 | Clobber | 2 | | | Simplified checkers, where tokens can capture neighbouring tokens. Designed to be amenable to combinatorial analysis. |
| 🔶 | Coin Game | 2 | | | Agents must collect their own and their collaborator's tokens while avoiding a third kind of token. Designed to test divining of the collaborator's intentions. References: Raileanu et al. '18, Modeling Others using Oneself in Multi-Agent Reinforcement Learning. |
| 🔶 | Colored Trails | 3 | | | Agents negotiate for chips that they play on a colored grid to move closer to the goal. References: Ya'akov et al. '10. Ficici & Pfeffer '08. de Jong et al. '11. |
| 🟢 | Connect Four | 2 | | | Players drop tokens into columns to try and form a pattern. |
| 🔶 | Cooperative Box-Pushing | 2 | | | Agents must collaborate to push a box into the goal. Designed to test collaboration. References: Seuken & Zilberstein '12, Improved Memory-Bounded Dynamic Programming for Decentralized POMDPs. |
| 🟢 | Chess | 2 | | | Players move pieces around the board with the goal of eliminating the opposing pieces. |
| 🔶 | Crazy Eights | 2 | | | A precursor of UNO (see here). |
| 🔶 | Dark Hex | 2 | | | Hex, except the opponent's tokens are hidden (imperfect-information version). |
| 🔶 | Deep Sea | 1 | | | Agent must explore to find reward (first version) or penalty (second version). Designed to test exploration. References: Osband et al. '17, Deep Exploration via Randomized Value Functions. |
| 🟢 | Dots and Boxes | 2 | | | Players put lines between dots to form boxes to get points. |
| 🔶 | Dou Dizhu | 3 | | | A three-player game where one player (the dizhu) plays against a team of two (the peasants). |
| 🔶 | Euchre | 4 | | | Trick-taking card game where players compete in pairs. |
| 🟢 | First-price Sealed-Bid Auction | 2-10 | | | Agents submit bids simultaneously; the highest bid wins, and that is the price paid. |
| 🟢 | Gin Rummy | 2 | | | Players score points by forming specific sets with the cards in their hands. |
| 🟢 | Go | 2 | | | Players place tokens on the board with the goal of encircling territory. |
| 🟢 | Goofspiel | 2-10 | | | Players bid with their cards to win other cards. |
| 🟢 | Hanabi | 2-5 | | | Players can see only the other players' pieces, and everyone must cooperate to win. References: Bard et al. '19, The Hanabi Challenge: A New Frontier for AI Research. Implemented via the Hanabi Learning Environment. |
| 🟢 | Havannah | 2 | | | Players add tokens to a hex grid to try and form a winning structure. |
| 🟢 | Hearts | 3-6 | | | A card game where players try to avoid playing the highest card in each round. |
| 🔶 | Hex | 2 | | | Players add tokens to a hex grid to try and link opposite sides of the board. References: Hex, the full story by Ryan Hayward and Bjarne Toft. |
| 🔶 | Kriegspiel | 2 | | | Chess with the opponent's pieces unknown. Illegal moves have no effect; it remains the same player's turn until they make a legal move. References: Monte Carlo tree search in Kriegspiel. Game-Tree Search with Combinatorially Large Belief States, Parker 2005. |
| 🟢 | Kuhn poker | 2 | | | Simplified poker amenable to game-theoretic analysis. |
| 🔶 | Laser Tag | 2 | | | Agents see a local part of the grid and attempt to tag each other with beams. References: Leibo et al. '17. Lanctot et al. '17. |
| 🟢 | Leduc poker | 2 | | | Simplified poker amenable to game-theoretic analysis. References: Southey et al. '05, Bayes' bluff: Opponent modelling in poker. |
| 🔶 | Lewis Signaling | 2 | | | Receiver must choose an action dependent on the sender's hidden state. Designed to demonstrate the use of conventions. |
| 🟢 | Liar's Dice | 2 | | | Players bid and bluff on the state of all the dice together, given only the state of their own dice. |
| 🔶 | Liar's Poker | 2+ | | | Players bid and bluff on the state of all hands, given only the state of their own hand. |
| 🔶 | Mensch ärgere Dich nicht | 2-4 | | | Players roll dice to move their pegs toward their home row while throwing other players' pegs to the out area. |
| 🔶 | Mancala | 2 | | | Players take turns sowing beans on the board and try to capture more beans than the opponent. |
| 🔶 | Markov Soccer | 2 | | | Agents must take the ball to their goal, and can 'tackle' the opponent by predicting their next move. References: Littman '94, Markov games as a framework for multi-agent reinforcement learning. He et al. '16, Opponent Modeling in Deep Reinforcement Learning. |
| 🟢 | Matching Pennies (3-player) | 3 | | | Players must predict and match/oppose another player. Designed to have an unstable Nash equilibrium. References: Jordan '93. |
| 🟢 | Mean Field Game: crowd modelling | n/a | n/a | n/a | n/a |
| 🟢 | Mean Field Game: crowd modelling 2d | n/a | n/a | n/a | n/a |
| 🟢 | Mean Field Game: linear-quadratic | n/a | | | Players are uniformly distributed and are then incentivized to gather at the same point (the lower the distance to the distribution's mean position, the higher the reward). A mean-reverting term pushes the players towards the distribution, and a Gaussian noise term perturbs them. The players' actions alter their states linearly (alpha * a * dt) and the cost thereof is quadratic (K * a^2 * dt), hence the name; see the schematic dynamics after the table. There exists an exact, closed-form solution for the fully continuous version of this game. References: Perrin et al. '19. |
| 🟢 | Mean Field Game: predator prey | n/a | n/a | n/a | n/a |
| 🟢 | Mean Field Game: routing | n/a | | | The representative player chooses at each node where to go. They have an origin, a destination and a departure time, and choose their route to minimize their travel time. Time spent on each link is a function of the distribution of players on the link when the player reaches the link. References: Cabannes et al. '21, Solving N-player dynamic routing games with congestion: a mean field approach. |
| 🔶 | Morpion Solitaire (4D) | 1 | | | A single-player game where the player aims to maximize lines drawn on a grid, under certain limitations. |
| 🟢 | Negotiation | 2 | | | Agents with different utilities must negotiate an allocation of resources. References: Lewis et al. '17. Cao et al. '18. |
| 🔶 | Nim | 2 | | | Two agents take objects from distinct piles, trying either to avoid taking the last one or to take it. Any positive number of objects can be taken on each turn, provided they all come from the same pile. |
| 🔶 | Nine men's morris | 2 | | | Two players put and move stones on the board to try to form mills (three adjacent stones in a line) to capture the other player's stones. |
| 🔶 | Oh Hell | 3-7 | | | A card game where players try to win exactly a declared number of tricks. |
| 🟢 | Oshi-Zumo | 2 | | | Players must repeatedly bid to push a token off the other side of the board. References: Buro '04, Solving the oshi-zumo game. Bosansky et al. '16, Algorithms for Computing Strategies in Two-Player Simultaneous Move Games. |
| 🟢 | Oware | 2 | | | Players redistribute tokens from their half of the board to capture tokens in the opponent's part of the board. |
| 🔶 | Pathfinding | 1-10 | | | Agents must move to their destination. References: Austerweil et al. '15. Greenwald & Hall '03. Littman '01. |
| 🟢 | Pentago | 2 | | | Players place tokens on the board, then rotate part of the board to a new orientation. |
| 🔶 | Phantom Go | 2 | | | Go, except the opponent's stones are hidden. The analogue of Kriegspiel for Go. References: Cazenave '05, A Phantom Go Program. |
| 🔶 | Phantom Tic-Tac-Toe | 2 | | | Tic-tac-toe, except the opponent's tokens are hidden. Designed as a simple, imperfect-information game. References: Auger '11, Multiple Tree for Partially Observable Monte-Carlo Tree Search. Lisy '14, Alternative Selection Functions for Information Set Monte Carlo Tree Search. Lanctot '13. |
| 🟢 | Pig | 2-10 | | | Each player rolls a die until they get a 1 or they 'hold'; the rolled total is added to their score. |
| 🟢 | Prisoner's Dilemma | 2 | | | Players decide whether to cooperate or defect given a situation with different payoffs. |
| 🔶 | Poker (Hold 'em) | 2-10 | | | Players bet on whether their hand of cards plus some communal cards will form a special set. Implemented via ACPC. |
| ❌ (#1158) | Quoridor | 2-4 | | | Each turn, players can either move their agent or add a small wall to the board. |
| ❌ (#811) | Reconnaissance Blind Chess | 2 | | | Chess with the opponent's pieces unknown, plus sensing moves. A chess variant invented by the Johns Hopkins University Applied Physics Lab. Used in a NeurIPS competition and the Hidden Information Game Competition. References: Markowitz et al. '18, On the Complexity of Reconnaissance Blind Chess. Newman et al. '16, Reconnaissance blind multi-chess: an experimentation platform for ISR sensor fusion and resource management. |
| 🟢 | Routing game | 1+ | | | Players choose at each node where they go. They have an origin, a destination and a departure time, and choose their route to minimize their travel time. Time spent on each link is a function of the number of players on the link when the player reaches the link. References: Cabannes et al. '21, Solving N-player dynamic routing games with congestion: a mean field approach. |
| 🔶 | Sheriff | 2 | | | Bargaining game, useful for studying correlated equilibria. Based on the board game Sheriff of Nottingham. References: Farina et al. '19, Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks. |
| 🔶 | Slovenian Tarok | 3-4 | | | Trick-based card game with bidding. References: Luštrek et al. '03, A program for playing Tarok. |
| 🔶 | Skat (simplified bidding) | 3 | | | Each turn, players bid to compete against the other two players. |
| 🔶 | Solitaire (K+) | 1 | | | A single-player card game. References: Bjarnason et al. '07, Searching solitaire in real time. |
| 🔶 | Spades | 4 | | | A four-player card game. |
| 🔶 | Team Dominoes | 4 | | | Team version of dominoes. Consists of 28 tiles, featuring all combinations of spot counts (also called pips or dots) between zero and six. |
| 🟢 | Tic-Tac-Toe | 2 | | | Players place tokens to try and form a pattern. |
| 🟢 | Tiny Bridge | 2,4 | | | Simplified Bridge with fewer cards and tricks. |
| 🟢 | Tiny Hanabi | 2-10 | | | Simplified Hanabi with just two turns. References: Foerster et al. '18, Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning. |
| 🟢 | Trade Comm | 2 | | | Players with different utilities and items communicate and then trade. |
| 🔶 | TwixT | 2 | | | Players place pegs and links on a 24x24 square to connect a line between opposite sides. |
| 🔶 | Ultimate Tic-Tac-Toe | 2 | | | Players try and form a pattern in local boards and a meta-board. |
| 🔶 | Weighted Voting Games | 1+ | | | Classic coalitional game. Players each have a weight w_i, and there is a quota q. Denote by p the binary vector representing a coalition over n players. The utility is 1 if p · w ≥ q, and 0 otherwise (see the sketch after the table). References: Chalkiadakis, Elkind, & Wooldridge '12. |
| 🟢 | Y | 2 | | | Players place tokens to try and connect sides of a triangular board. |
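For the Mean Field Game: linear-quadratic entry, the ingredients listed in its description (linear control term, mean-reverting drift, Gaussian noise, quadratic control cost, and a reward for staying close to the distribution mean) can be assembled schematically as below. This is only a reading of the table entry: the constants c, sigma and lambda, and the exact signs and scaling, are assumptions rather than the implementation's actual parameterization (see Perrin et al. '19 for the exact model).

```latex
% Schematic reading of the linear-quadratic MFG row above (assumed form, not
% the exact parameterization of the implementation or of Perrin et al. '19).
% X_t = state, a_t = action, \bar{m}_t = mean of the population distribution,
% W_t = Brownian motion; \alpha, c, \sigma, K, \lambda are assumed constants.
dX_t = \bigl(\alpha\, a_t + c\,(\bar{m}_t - X_t)\bigr)\,dt + \sigma\, dW_t,
\qquad
\text{running cost} = \bigl(K\, a_t^2 + \lambda\,(X_t - \bar{m}_t)^2\bigr)\,dt .
```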
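The Weighted Voting Games utility described in its table entry reduces to a one-line threshold check. A minimal sketch in plain Python, independent of the OpenSpiel implementation and with illustrative weights and quota:

```python
# Utility of a coalition in a weighted voting game: 1 if the coalition's
# total weight meets the quota q, 0 otherwise.
def coalition_utility(p, w, q):
    """p: 0/1 coalition membership vector, w: player weights, q: quota."""
    total = sum(pi * wi for pi, wi in zip(p, w))
    return 1 if total >= q else 0

# Illustrative example: three players with weights 4, 3, 2 and quota 6.
print(coalition_utility([1, 1, 0], [4, 3, 2], 6))  # 1 (4 + 3 >= 6)
print(coalition_utility([1, 0, 1], [4, 3, 2], 6))  # 1 (4 + 2 >= 6)
print(coalition_utility([0, 1, 1], [4, 3, 2], 6))  # 0 (3 + 2 < 6)
```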