Welcome to this repository, dedicated to exploring the application of Reinforcement Learning (RL) algorithms to optimizing environmental conditions in greenhouses. The project implements Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), and Soft Actor-Critic (SAC) in custom-designed simulation environments.
The primary objective of this project is to understand the behavior and performance of different RL algorithms in the complex and dynamic environment of a greenhouse. We have implemented both discrete and continuous action spaces and compared these algorithms on tasks such as water-level control, heat regulation, and nutrient assimilation.
- Custom Environments: The repository includes code for custom environments that simulate different aspects of greenhouse control, such as water levels and temperature (a minimal sketch follows this list).
- Multiple Algorithms: The project investigates PPO, A2C, and SAC, adapting them to both discrete and continuous action spaces.
- Comprehensive Analysis: Alongside the code, the repository contains a thorough report detailing the methodologies, results, and implications of the experiments.
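To make the environment design concrete, here is a minimal sketch of what a water-level environment supporting both discrete and continuous actions might look like. It assumes the Gymnasium API; the class name, dynamics, and reward are illustrative placeholders, not the repository's actual implementation.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class WaterLevelEnv(gym.Env):
    """Illustrative greenhouse water-level environment (a placeholder, not this repo's code)."""

    def __init__(self, continuous=True, target=0.5):
        super().__init__()
        self.continuous = continuous
        self.target = target  # desired normalized water level
        if continuous:
            # Continuous control: signed irrigation rate in [-1, 1].
            self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        else:
            # Discrete control: 0 = drain, 1 = hold, 2 = irrigate.
            self.action_space = spaces.Discrete(3)
        # Observation: the current normalized water level.
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
        self.level = 0.0
        self.steps = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self.level = float(self.np_random.uniform(0.0, 1.0))
        self.steps = 0
        return np.array([self.level], dtype=np.float32), {}

    def step(self, action):
        rate = float(action[0]) if self.continuous else float(action) - 1.0
        # First-order dynamics: the action shifts the level; evaporation slowly drains it.
        self.level = float(np.clip(self.level + 0.1 * rate - 0.02, 0.0, 1.0))
        self.steps += 1
        reward = -abs(self.level - self.target)  # penalize deviation from the target
        truncated = self.steps >= 200  # fixed episode length
        return np.array([self.level], dtype=np.float32), reward, False, truncated, {}
```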
- Clone the Repository: `git clone https://github.com/fayelhassan/AI-for-Sustainable-Agriculture.git`
- Navigate to the Project Directory: `cd AI-for-Sustainable-Agriculture`
- Install Dependencies: `pip install -r requirements.txt`
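With the dependencies installed, a training run boils down to a few lines. The sketch below assumes Stable-Baselines3 as the RL library (a common choice, though the repository's notebooks may wire things up differently) and reuses the illustrative `WaterLevelEnv` from above:

```python
from stable_baselines3 import A2C, PPO, SAC

# WaterLevelEnv is the illustrative environment sketched earlier, not the repo's own class.
# PPO and A2C accept both discrete and continuous action spaces;
# SAC requires a continuous (Box) action space.
for algo in (PPO, A2C, SAC):
    env = WaterLevelEnv(continuous=True)
    model = algo("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```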
Each Jupyter notebook corresponds to a different experiment setup and can be run independently to generate its specific results.
- Open Jupyter Notebook: `jupyter notebook`
- Navigate to the Notebook: Choose the notebook corresponding to the algorithm and environment you are interested in.
- Run the Notebook: Execute all cells in the notebook.
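Alternatively, a notebook can be executed non-interactively with `jupyter nbconvert`. The filename below is a placeholder; substitute the notebook you want to run:

```bash
# Run all cells headlessly and write the executed copy to a new file.
# "ppo_water_level.ipynb" is a hypothetical name, not an actual file in this repo.
jupyter nbconvert --to notebook --execute ppo_water_level.ipynb --output ppo_water_level_out.ipynb
```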
For a deeper understanding of the algorithms, their hyperparameter settings, and the custom environments, please refer to the accompanying project report, which breaks down the methodologies used, the results obtained, and the insights gained from this study.
- Algorithms and Environments: Our report provides a detailed explanation of each algorithm and custom environment used in the project.
- Results: For a complete breakdown of how each algorithm performed in different environments, consult the Results section of the report.
Feel free to fork the repository and submit pull requests. For bugs, questions, and discussions, please use GitHub Issues.
This project is licensed under the MIT License - see the LICENSE.md file for details.