An interactive Streamlit application for visualizing gradient flow in neural networks during training. This tool allows researchers and practitioners to gain insights into the behavior of gradients across different network architectures and hyperparameters.
- Customizable Neural Network Settings: Configure network depth, width, activation functions, and learning rates.
- Real-time Gradient Flow Visualization: Observe gradient norms across different layers during training.
- Performance Metrics: Track loss curves and various performance metrics as you adjust parameters.
- Support for Dense and Sparse Networks: Compare gradient flow between dense and sparse architectures.
- Multiple Datasets: Choose from a variety of built-in datasets for experimentation.
- Advanced Training Options: Implement gradient clipping and normalization techniques.
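The advanced training options above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the app's actual code: a tiny two-layer network, one backward pass, global-norm gradient clipping, and the per-layer gradient norms that a gradient-flow plot would display (the model shape and the clipping threshold of 1.0 are arbitrary examples):

```python
import torch
import torch.nn as nn

# Illustrative model, not the one defined in this repository.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x, y = torch.randn(16, 4), torch.randn(16, 1)

# One forward/backward pass to populate .grad on every parameter.
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Clip the global gradient norm to an example threshold of 1.0.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

# Collect one norm per named parameter, as a gradient-flow plot would.
grad_norms = {name: p.grad.norm().item()
              for name, p in model.named_parameters()
              if p.grad is not None}
```

`clip_grad_norm_` rescales all gradients in place when their combined norm exceeds `max_norm`, which is the standard remedy for exploding gradients.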
- Clone the repository:
  git clone https://github.com/yourusername/gradient-flow-visualizer.git
  cd gradient-flow-visualizer
- Create and activate a virtual environment (optional but recommended):
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
- Install the required dependencies:
  pip install -r requirements.txt
To run the Gradient Flow Visualizer:
streamlit run src/app.py
Navigate to the provided local URL in your web browser to interact with the application.
The Gradient Flow Visualizer leverages Streamlit for its interactive interface and PyTorch for neural network training. The main components include:
- Neural Network Models: Implemented in src/model/model.py and src/model/sparse_model.py.
- Training Loop: Defined in src/training/training.py.
- Data Loading: Handled by src/data/dataloader.py.
- Visualization: Powered by src/visualizer/visualizer.py.
The application allows users to configure network parameters, select datasets, and visualize gradient flow in real-time during training.
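The record-then-visualize loop described above can be sketched as follows. This is a simplified stand-in for the components in src/ (the real classes there may differ): each optimizer step records one dictionary of per-layer gradient norms, producing the history that the visualizer plots over time:

```python
import torch
import torch.nn as nn

# Illustrative model and data; the app's own model lives in src/model/.
model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 2), torch.randn(64, 1)

history = []  # one {layer name: gradient norm} dict per training step
for step in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    # Snapshot the gradient norms before the optimizer consumes them.
    history.append({name: p.grad.norm().item()
                    for name, p in model.named_parameters()})
    opt.step()
```

A live dashboard such as this one would redraw its charts from `history` after each step rather than after the whole loop.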
The Gradient Flow Visualizer provides several key visualizations:
- Neural Network Architecture: Visual representation of the network structure.
- Metrics History: Plots of various performance metrics over training epochs.
- Gradient Flow Distribution: Box plots showing the distribution of gradient norms across layers.
- Gradient Flow Over Time: Line plots depicting how gradient norms change during training.
- Gradient Flow Statistics: Detailed statistics on gradient behavior for each layer.
These visualizations help identify issues like vanishing or exploding gradients and assess the overall health of gradient flow in the network.
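As a sketch of how such issues can be flagged automatically, the hypothetical helper below (not part of the repository) classifies recorded per-layer norms against example thresholds; the cutoff values are illustrative, not ones the app prescribes:

```python
def diagnose(grad_norms, vanish_below=1e-6, explode_above=1e3):
    """Label layers whose gradient norm falls outside example thresholds.

    Hypothetical helper for illustration; thresholds are arbitrary.
    """
    issues = {}
    for layer, norm in grad_norms.items():
        if norm < vanish_below:
            issues[layer] = "vanishing"
        elif norm > explode_above:
            issues[layer] = "exploding"
    return issues

print(diagnose({"layer1": 1e-9, "layer2": 0.5, "layer3": 5e4}))
# -> {'layer1': 'vanishing', 'layer3': 'exploding'}
```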
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
For any questions or feedback, please open an issue in the GitHub repository or contact the maintainer at cristianleo120@gmail.com.
Happy visualizing! 🎨📈