Welcome to the Transformer Model PyTorch repository! This project showcases a custom implementation of the Transformer architecture using PyTorch. Dive into sequence-to-sequence learning with one of the most influential models in natural language processing.
The Transformer model, introduced in the landmark paper "Attention Is All You Need" by Vaswani et al., has transformed the landscape of NLP. Discarding traditional recurrent architectures, it relies on self-attention mechanisms to excel across a wide range of tasks. This repository applies the Transformer to English-to-Hindi translation, demonstrating its ability to handle complex linguistic structures across languages, and walks through the core of the architecture: its encoder-decoder framework and how it processes language pairs effectively.
- Pure PyTorch Implementation: Delve deep into the Transformer's intricacies with a from-scratch implementation that lets you explore every layer, every neuron.
- Modular Design: Tinker with key components like multi-head self-attention and positional encoding. Our design lets you adapt and expand parts effortlessly.
- Comprehensive Training and Evaluation Scripts: Jump right into training with pre-written scripts, making it easy to start translating between English and Hindi or to assess your model's performance.
- Visualization Tools: Get a graphical view of what's happening under the hood. Our tools let you watch the attention mechanisms at work and monitor training progress in real time.
- Bilingual Tokenization Support: Tailored to the nuances of English and Hindi, ensuring accurate and effective handling of linguistic elements unique to both languages.
Before you begin, ensure you have the following installed:
- Python 3.7 or higher
- PyTorch 1.8.0 or higher
- NumPy
- Matplotlib (optional, for visualization)
- Altair (for advanced visualizations)
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/transformer-from-scratch.git
   cd transformer-from-scratch
   ```

2. Create a virtual environment and activate it:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```
Our Transformer model includes the following key components; minimal sketches of each are given after the list:
- Input Embeddings: transform token indices into dense vectors, giving the model a continuous representation of the text.
- Positional Encoding: adds position information to the embeddings so the model can account for the order of tokens in the input.
- Multi-Head Self-Attention: lets the model attend to different parts of the input sequence in parallel, sharpening its grasp of context.
- Feed-Forward Neural Network: a position-wise two-layer network that adds non-linearity and representational capacity.
- Encoder and Decoder Layers: stacks of attention and feed-forward sublayers that build increasingly rich representations of the input.
- Output Linear Layer: projects the decoder's output to the size of the target vocabulary, producing a score for every candidate token.
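A minimal sketch of the input embedding component, assuming the scaling by the square root of the model dimension used in the original paper (the names `InputEmbeddings` and `d_model` are illustrative, not necessarily the identifiers this repository uses):

```python
import math
import torch.nn as nn

class InputEmbeddings(nn.Module):
    """Map token indices to dense d_model-dimensional vectors."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.d_model = d_model
        self.embedding = nn.Embedding(vocab_size, d_model)

    def forward(self, x):
        # "Attention Is All You Need" scales embeddings by sqrt(d_model).
        return self.embedding(x) * math.sqrt(self.d_model)
```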
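Positional encoding is commonly implemented with the fixed sinusoidal scheme from the paper; a sketch under that assumption:

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Add fixed sinusoidal position information to the embeddings."""

    def __init__(self, d_model: int, max_len: int = 5000, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        position = torch.arange(max_len).unsqueeze(1)          # (max_len, 1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)           # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)           # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))            # (1, max_len, d_model)

    def forward(self, x):
        # x: (batch, seq_len, d_model); add the encoding for each position.
        x = x + self.pe[:, : x.size(1)]
        return self.dropout(x)
```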
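Multi-head self-attention splits the model dimension into several heads, applies scaled dot-product attention in each, and recombines the results. A sketch, again with illustrative names:

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Scaled dot-product attention over num_heads parallel heads."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must divide evenly into heads"
        self.d_k = d_model // num_heads
        self.num_heads = num_heads
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):
        batch = q.size(0)
        # Project and split into heads: (batch, heads, seq_len, d_k).
        q = self.w_q(q).view(batch, -1, self.num_heads, self.d_k).transpose(1, 2)
        k = self.w_k(k).view(batch, -1, self.num_heads, self.d_k).transpose(1, 2)
        v = self.w_v(v).view(batch, -1, self.num_heads, self.d_k).transpose(1, 2)
        # Scaled dot-product attention scores.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = scores.softmax(dim=-1)
        out = attn @ v                                   # (batch, heads, seq_len, d_k)
        # Merge heads back into a single d_model-wide representation.
        out = out.transpose(1, 2).contiguous().view(batch, -1, self.num_heads * self.d_k)
        return self.w_o(out)
```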
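The position-wise feed-forward network is two linear layers with a non-linearity between them; `d_ff = 2048` below is the paper's default, not necessarily this repository's setting:

```python
import torch.nn as nn

class FeedForward(nn.Module):
    """Two-layer network applied independently at every position."""

    def __init__(self, d_model: int, d_ff: int = 2048, dropout: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),   # expand
            nn.ReLU(),                  # non-linearity
            nn.Dropout(dropout),
            nn.Linear(d_ff, d_model),   # project back to d_model
        )

    def forward(self, x):
        return self.net(x)
```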
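An encoder layer wires these pieces together with residual connections and layer normalization. This sketch reuses the `MultiHeadAttention` and `FeedForward` classes above and assumes the post-layer-norm arrangement of the original paper:

```python
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One encoder block: self-attention then feed-forward, each wrapped in
    a residual connection followed by layer normalization."""

    def __init__(self, d_model: int, num_heads: int, d_ff: int, dropout: float = 0.1):
        super().__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)
        self.ff = FeedForward(d_model, d_ff, dropout)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, src_mask=None):
        # Residual connection around self-attention.
        x = self.norm1(x + self.dropout(self.self_attn(x, x, x, src_mask)))
        # Residual connection around the feed-forward network.
        return self.norm2(x + self.dropout(self.ff(x)))
```

A decoder layer follows the same pattern with an extra cross-attention sublayer over the encoder output.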
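Finally, the output projection maps decoder states to vocabulary-sized scores; returning log-probabilities rather than raw logits is an assumption here:

```python
import torch
import torch.nn as nn

class ProjectionLayer(nn.Module):
    """Project decoder output to scores over the target vocabulary."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        # (batch, seq_len, d_model) -> (batch, seq_len, vocab_size) log-probabilities.
        return torch.log_softmax(self.proj(x), dim=-1)
```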
To better understand the model's inner workings, use our visualization tools to inspect attention weights and training metrics. They let you:

- visualize the self-attention mechanisms in the encoder layers;
- visualize the self-attention mechanisms in the decoder layers;
- visualize the cross-attention between the encoder and decoder layers.
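The repository's own visualization scripts are not reproduced here; as a stand-in, here is a hypothetical Matplotlib helper that renders one attention matrix as a heatmap (the `attn` tensor and token lists are assumed inputs, e.g. a single head of the post-softmax attention weights):

```python
import matplotlib.pyplot as plt
import torch

def plot_attention(attn: torch.Tensor, src_tokens, tgt_tokens, title: str = "Attention"):
    """Plot one head's attention weights as a heatmap.

    attn: (tgt_len, src_len) tensor of attention weights taken from the
    model after the softmax; rows are target tokens, columns source tokens.
    """
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.imshow(attn.detach().cpu().numpy(), cmap="viridis")
    ax.set_xticks(range(len(src_tokens)), labels=src_tokens, rotation=90)
    ax.set_yticks(range(len(tgt_tokens)), labels=tgt_tokens)
    ax.set_title(title)
    plt.tight_layout()
    plt.show()
```

The same helper applies to encoder self-attention, decoder self-attention, and encoder-decoder cross-attention; only the tensor you pass in changes.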
Training loss can likewise be plotted over epochs to monitor convergence.
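A minimal sketch for plotting the loss curve, assuming you collect one average loss value per epoch during training:

```python
import matplotlib.pyplot as plt

def plot_training_loss(losses):
    """Plot per-epoch training loss values collected during training."""
    plt.figure(figsize=(6, 4))
    plt.plot(range(1, len(losses) + 1), losses, marker="o")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.title("Training Loss")
    plt.grid(True)
    plt.show()
```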
This project draws inspiration from the original Transformer paper and various open-source implementations. We thank the PyTorch community for their comprehensive resources and tutorials.
This project is licensed under the MIT License. See the LICENSE file for more details.