A comprehensive machine learning pipeline for analyzing and interpreting gravitational lensing data in dark matter research. It uses state-of-the-art deep learning architectures for tasks ranging from classification and lens detection to mass prediction and image super-resolution.
➡️ Click Here ⬅️ to access all the data including the trained models for all modules.
Everything is built with Keras and TensorFlow.
Summary:
1) Image Super-Resolution: Employ various techniques like SuperResCNN, EDSR, LapSRN, and ESRGAN to enhance the resolution of lensing images.
2) Classify Gravitational Lensing Data: Categorize various types of lensing phenomena using multiple architectures such as AttentionCNN, Vision Transformer (ViT), and ResNet50.
3) Lens Detection: Utilize an AttentionCNN model to identify the presence of gravitational lenses in the given datasets.
4) Regression Mass Prediction: Employ Equivariant Transformers to predict the mass of dark matter involved in gravitational lensing events.
5) Advanced Classification: Utilize Equivariant Neural Networks for more nuanced and rotationally invariant classification tasks.
6) Vision Transformer Implementation: Standalone implementation of the Vision Transformer model suited for gravitational lensing data.
7) Self-Supervised Learning: Pre-train equivariant models with a contrastive loss, then fine-tune them for classification and regression.
- Module 1: Image Super-Resolution
Approach | MSE | SSIM | PSNR |
---|---|---|---|
SuperResCNN (Super-Resolution Convolutional Neural Network) Notebook: .ipynb Establishes a baseline for performance analysis and indicates directions for improvement (e.g., residual blocks, self-attention, or a GAN architecture). SuperResCNN is an upsampling layer followed by a three-layer network that maps low-resolution images to high-resolution ones (see the sketch below the table). | 0.000065 | 0.99168 | 41.780569 |
EDSR (Enhanced Deep Residual Networks) Notebook: .ipynb Uses residual blocks to capture more complex image features. | 0.000298 | 0.987563 | 36.769835 |
LapSRN (Laplacian Pyramid Super-Resolution Network) Notebook: .ipynb Preserves detail with an Add() layer in the residual_block function, improving memory efficiency and speeding up inference while reducing blur and sharpening the image. | 0.004762 | 0.509009 | 22.244892 |
ESRGAN (Enhanced Super-Resolution Generative Adversarial Networks) Notebook: .ipynb A GAN that combats mode collapse with loss functions such as perceptual loss (built on VGG19 features) and uses sub-pixel convolution for high-resolution image generation. Residual dense blocks, batch normalization, and related techniques stabilize training and improve visual fidelity. | 0.000968 | 0.967625 | 27.573939 |
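For reference, a minimal SuperResCNN-style baseline in Keras might look like the following. The layer widths, kernel sizes, and bilinear upsampling are assumptions for this sketch, not the repository's exact configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_srcnn(scale=2, channels=1):
    """Sketch of a SuperResCNN baseline: upsample, then a three-layer CNN."""
    inputs = keras.Input(shape=(None, None, channels))
    x = layers.UpSampling2D(size=scale, interpolation="bilinear")(inputs)
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(x)  # feature extraction
    x = layers.Conv2D(32, 5, padding="same", activation="relu")(x)  # non-linear mapping
    outputs = layers.Conv2D(channels, 5, padding="same")(x)         # HR reconstruction
    return keras.Model(inputs, outputs, name="SuperResCNN")

model = build_srcnn()
model.compile(optimizer="adam", loss="mse")  # MSE matches the metric reported above
```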
- Module 2: Multi-Label Classification (approaches are listed in order of improving results)
Approach | Val AUC | Confusion Matrix and ROC plot |
---|---|---|
Channelwise Attention CNN Notebook: .ipynb A CNN with two branches, each containing a channelwise attention mechanism that refines the learned features. | 0.80 | |
Vision Transformer (Custom) Notebook: .ipynb A self-attention Vision Transformer whose architecture is implemented from scratch, after which ImageNet-pretrained weights are loaded. The model processes image patches through 12 transformer blocks with multi-head self-attention and MLPs, then outputs class probabilities. | 0.90 | |
ResNet50 Transfer Learning Notebook: .ipynb ResNet-50 is used for transfer learning: the classification head is removed and replaced with batch normalization, dropout, and a dense layer with softmax activation for 3-class probability output (see the sketch below the table). The implementation is kept simple by relying on existing libraries for the backbone architecture. | 0.98 | |
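A minimal sketch of the ResNet-50 transfer learning setup described in the last row; the input shape, dropout rate, and optimizer here are assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_resnet50_classifier(input_shape=(224, 224, 3), num_classes=3):
    """Frozen ImageNet ResNet-50 backbone with a new BN + dropout + softmax head."""
    base = keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=input_shape, pooling="avg")
    base.trainable = False                  # train only the new head at first
    inputs = keras.Input(shape=input_shape)
    x = base(inputs, training=False)        # keep the backbone's BN statistics frozen
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.3)(x)              # dropout rate is an assumption
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # 3-class output
    return keras.Model(inputs, outputs, name="resnet50_transfer")

model = build_resnet50_classifier()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=[keras.metrics.AUC(name="auc")])  # Val AUC is the table's metric
```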
- Module 3: Lens Finding
Approach | Val AUC | Confusion Matrix and ROC plot |
---|---|---|
Self-Attention CNN Notebook: .ipynb A multimodal model that uses CNNs and attention mechanisms to process both images and auxiliary features. The image and feature branches are combined, self-attention is applied, and Dense layers output a lens probability (see the sketch below the table). | 0.99 | |
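One plausible way to wire such a multimodal lens finder in Keras; the branch widths, token split, and number of auxiliary features are assumptions for this sketch:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_lens_finder(img_shape=(64, 64, 1), n_features=4):
    """Image branch + feature branch, fused and passed through self-attention."""
    img_in = keras.Input(shape=img_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)            # 64-d image embedding

    feat_in = keras.Input(shape=(n_features,))
    f = layers.Dense(32, activation="relu")(feat_in)  # 32-d feature embedding

    joint = layers.Concatenate()([x, f])              # 96-d fused vector
    tokens = layers.Reshape((8, 12))(joint)           # view it as 8 tokens of width 12
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=12)(tokens, tokens)
    h = layers.Flatten()(attn)
    h = layers.Dense(16, activation="relu")(h)
    out = layers.Dense(1, activation="sigmoid")(h)    # lens / no-lens probability
    return keras.Model([img_in, feat_in], out, name="lens_finder")
```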
- Module 4: Learning Mass of Dark Matter Halo
Approach | MSE |
---|---|
Representational Learning Transformers Notebook: .ipynb Transformers use custom RotationalConv2D layers and a contrastive loss to learn equivariant representations, improving performance on tasks involving image augmentations such as rotations. The model is pre-trained with ResNet50 weights and fine-tuned for the regression task (a hypothetical RotationalConv2D sketch follows the table). | 2.28 x 10^-4 |
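The RotationalConv2D layer is custom to this project, so the following is only a hypothetical reconstruction of one common recipe: share a single convolution across the four 90-degree rotations of the input and pool the aligned responses, making the output stable under those rotations:

```python
import tensorflow as tf
from tensorflow.keras import layers

class RotationalConv2D(layers.Layer):
    """Hypothetical rotation-robust convolution: one shared Conv2D applied
    to all four 90-degree rotations, with the responses rotated back and
    averaged over orientations."""
    def __init__(self, filters, kernel_size, **kwargs):
        super().__init__(**kwargs)
        self.conv = layers.Conv2D(filters, kernel_size,
                                  padding="same", activation="relu")

    def call(self, x):
        outs = []
        for k in range(4):
            r = self.conv(tf.image.rot90(x, k=k))          # convolve each rotated copy
            outs.append(tf.image.rot90(r, k=(4 - k) % 4))  # align the responses again
        return tf.reduce_mean(tf.stack(outs, axis=0), axis=0)  # pool over orientations
```

Such a layer drops in wherever a plain Conv2D would go, e.g. `RotationalConv2D(32, 3)(images)`.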
- Module 5: Exploring Equivariant Neural Networks
Approach | Val AUC | Confusion Matrix and ROC plot |
---|---|---|
Self-Supervised Equivariant Transformers Notebook: .ipynb Equivariant Transformers use custom RotationalConv2D layers and ResNet50 transfer learning to maintain equivariance under input rotations. A contrastive loss guides the embeddings (a sketch of such a loss follows the table), after which the model is fine-tuned for classification. | 0.99 | |
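To make the "contrastive loss guides embeddings" step concrete, here is a minimal NT-Xent-style loss, assuming `z_a[i]` and `z_b[i]` are embeddings of two rotated views of the same image; the notebooks' exact formulation may differ:

```python
import tensorflow as tf

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """Symmetric contrastive loss over a batch: each embedding must pick
    out its partner view among all candidates in the other batch."""
    z_a = tf.math.l2_normalize(z_a, axis=1)
    z_b = tf.math.l2_normalize(z_b, axis=1)
    logits = tf.matmul(z_a, z_b, transpose_b=True) / temperature  # cosine similarities
    labels = tf.range(tf.shape(logits)[0])                        # positives on the diagonal
    loss_a = tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
    loss_b = tf.keras.losses.sparse_categorical_crossentropy(labels, tf.transpose(logits), from_logits=True)
    return tf.reduce_mean(loss_a + loss_b) / 2.0
```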
- Module 6: Exploring Vision Transformers
Approach | Val AUC | Confusion Matrix and ROC plot |
---|---|---|
Vision Transformers Notebook: .ipynb Self-written, inspired by vit-keras, which has not been maintained since 2021. Uses self-attention mechanisms. The implementation follows the standard steps (a 2D Conv layer, token flattening, positional embeddings, and transformer blocks) and applies pretrained 'npz' weights for prediction (a scaled-down sketch follows the table). | 0.99 | |
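A scaled-down sketch of those steps in Keras; all sizes here are assumptions (the actual model uses 12 blocks and loads pretrained 'npz' weights):

```python
from tensorflow import keras
from tensorflow.keras import layers

class AddPositionEmbedding(layers.Layer):
    """Learned positional embeddings added to the patch tokens."""
    def build(self, input_shape):
        self.pos = self.add_weight(
            name="pos_embed", shape=(1, input_shape[1], input_shape[2]),
            initializer=keras.initializers.TruncatedNormal(stddev=0.02))

    def call(self, x):
        return x + self.pos

def build_mini_vit(image_size=96, patch_size=16, dim=64, heads=4,
                   depth=2, num_classes=3):
    inputs = keras.Input(shape=(image_size, image_size, 3))
    x = layers.Conv2D(dim, patch_size, strides=patch_size)(inputs)  # 2D Conv patchify
    n = (image_size // patch_size) ** 2
    x = layers.Reshape((n, dim))(x)                                 # token flattening
    x = AddPositionEmbedding()(x)                                   # positional embeddings
    for _ in range(depth):                                          # transformer blocks
        h = layers.LayerNormalization()(x)
        h = layers.MultiHeadAttention(num_heads=heads, key_dim=dim // heads)(h, h)
        x = layers.Add()([x, h])                                    # attention + residual
        h = layers.LayerNormalization()(x)
        h = layers.Dense(dim * 2, activation="gelu")(h)
        h = layers.Dense(dim)(h)                                    # MLP sub-block
        x = layers.Add()([x, h])                                    # MLP + residual
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs, name="mini_vit")
```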
- Module 7: Self-Supervised Learning
Approach | Metric | Confusion Matrix and ROC plot |
---|---|---|
Classification-Self_Supervised Notebook: .ipynb Equivariant Transformers use custom RotationalConv2D layers and ResNet50 transfer learning to maintain equivariance under input rotations. A contrastive loss guides the embeddings, followed by fine-tuning for classification (see the fine-tuning sketch below the table). | 0.99 AUC | |
Regression-Self_Supervised Notebook: .ipynb Transformers use custom RotationalConv2D layers and a contrastive loss to learn equivariant representations, improving performance on tasks involving image augmentations such as rotations. The model is pre-trained with ResNet50 weights and fine-tuned for the regression task. | 2.28 x 10^-4 MSE | |
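Both rows share the same fine-tuning pattern: pre-train an encoder with the contrastive loss, then attach a small task head. A minimal sketch, assuming `encoder` maps images to a flat embedding vector (the function and its arguments are hypothetical):

```python
from tensorflow import keras
from tensorflow.keras import layers

def attach_task_head(encoder, task="classification", num_classes=3):
    """Swap the contrastive projection head for a task head and fine-tune."""
    encoder.trainable = False                    # optionally unfreeze top layers later
    inputs = keras.Input(shape=encoder.input_shape[1:])
    z = encoder(inputs, training=False)
    if task == "classification":
        outputs = layers.Dense(num_classes, activation="softmax")(z)  # lensing classes
        loss = "categorical_crossentropy"
    else:
        outputs = layers.Dense(1)(z)             # dark matter halo mass (regression)
        loss = "mse"
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=loss)
    return model
```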