
NeuroBlock: AI-Powered Tetris via Reinforcement Learning

NeuroBlock demonstrates reinforcement learning by training an AI to play the classic game of Tetris. Built with PyTorch for modeling and OpenCV for visualization, the project shows how an agent learns and adapts through gameplay.


Features

  1. AI learns to play Tetris using reinforcement learning.
  2. Visual game interface powered by OpenCV to demonstrate the AI's performance.
  3. Customizable training parameters to fine-tune learning behavior.

Installation

Requirements

  • numpy
  • torch
  • termcolor
  • opencv-python
  • Pillow

To install all dependencies, run:

pip install -r requirements.txt

Usage

  1. Run the Pre-trained Model: View the AI in action with the final trained model:
python run.py

A window will open displaying the AI playing Tetris.

  2. Train the Model: Start training the model from scratch:
python train.py

Training Parameters

You can customize the following parameters in the code to optimize the training process:

Parameter        Default Value   Description
epochs           30000           Number of training epochs.
epsilon          1               Initial epsilon value for the epsilon-greedy algorithm. Stays between 1 and the epsilon floor (0.001).
gamma            0.999           Epsilon decay rate. Values closer to 1 slow the decay; smaller values decrease epsilon more quickly.
replay_size      100000          Capacity of the replay buffer that stores Tetris states for training.
minibatch_size   200             Size of the minibatch sampled from the replay buffer during training.
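As a rough illustration, the parameters above could be expressed as plain Python variables. This is a hedged sketch: the variable names mirror the table, and the geometric decay toward the epsilon floor is an assumption about how gamma is applied, not the project's actual code.

```python
# Hypothetical training configuration mirroring the table above;
# names and the decay formula are illustrative, not NeuroBlock's actual code.
epochs = 30000          # number of training epochs
epsilon = 1.0           # initial exploration rate
epsilon_floor = 0.001   # lower bound on epsilon
gamma = 0.999           # epsilon decay rate per epoch (closer to 1 = slower decay)
replay_size = 100_000   # replay buffer capacity
minibatch_size = 200    # samples drawn from the buffer per training step

def decayed_epsilon(epoch: int) -> float:
    """Epsilon after `epoch` steps of geometric decay, clamped at the floor."""
    return max(epsilon_floor, epsilon * gamma ** epoch)
```

With these defaults, epsilon falls below 0.01 after roughly 4,600 epochs and is clamped at 0.001 from there on, so late training is almost entirely exploitation.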

Visualization

During gameplay, the AI's decisions and movements are displayed in real time. [Sample frame of the AI playing Tetris]


How It Works

  • Reinforcement Learning: The AI leverages Q-learning with a neural network to maximize its score by placing tetrominoes efficiently.
  • Epsilon-Greedy Algorithm: Balances exploration and exploitation during gameplay to improve decision-making.
  • Replay Buffer: Stores past game states to train the AI efficiently by replaying critical scenarios.
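The epsilon-greedy selection and replay buffer described above can be sketched as follows. This is a minimal illustration assuming the network scores candidate board states; the function and variable names (`q_net`, `candidate_states`) are hypothetical and not NeuroBlock's actual API.

```python
import random
from collections import deque

import torch

# Hypothetical replay buffer; maxlen matches the default replay_size.
replay_buffer = deque(maxlen=100_000)

def select_action(q_net, candidate_states, epsilon):
    """Epsilon-greedy choice among candidate next states.

    With probability epsilon, explore by picking a random candidate;
    otherwise exploit by picking the state the network scores highest.
    """
    if random.random() < epsilon:
        return random.randrange(len(candidate_states))
    with torch.no_grad():
        q_values = q_net(candidate_states)  # shape: (num_candidates, 1)
    return int(torch.argmax(q_values).item())
```

During training, each transition would be appended to `replay_buffer`, and minibatches sampled from it break the correlation between consecutive game states, which stabilizes Q-learning.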

Future Improvements

  1. Explore additional reinforcement learning strategies for better performance.
  2. Add a user interface to adjust training parameters dynamically.

Feel free to contribute or provide feedback! 🚀
