Merge pull request #85 from shinning0821/main
Update LoRa
shinning0821 authored Jan 21, 2024
2 parents def01b4 + f290256 commit fc93e91
Showing 45 changed files with 1,885 additions and 382 deletions.
Binary file modified __pycache__/function.cpython-37.pyc
Binary file modified conf/__pycache__/global_settings.cpython-37.pyc
2 changes: 1 addition & 1 deletion conf/global_settings.py
@@ -25,7 +25,7 @@
CHECKPOINT_PATH = 'checkpoint'

# total training epochs
EPOCH = 50
EPOCH = 100
step_size = 10
i = 1
MILESTONES = []
Binary file added figs/lora/MobileSAM-Ti (AdaLora)_loss.png
Binary file added figs/lora/MobileSAM-Ti (AdaLora)_performance.png
Binary file added figs/lora/MobileSAM-Ti (Adapter)_loss.png
Binary file added figs/lora/MobileSAM-Ti (Adapter)_performance.png
Binary file added figs/lora/MobileSAM-Ti (Lora)_loss.png
Binary file added figs/lora/MobileSAM-Ti (Lora)_performance.png
Binary file added figs/lora/lora.png
21 changes: 19 additions & 2 deletions function.py
@@ -122,6 +122,7 @@ def train_sam(args, net: nn.Module, optimizer, train_loader,
true_mask_ave = (true_mask_ave > 0.5).float()
#true_mask_ave = cons_tensor(true_mask_ave)
# imgs = imgs.to(dtype = mask_type,device = GPUdevice)


'''Train'''
if args.mod == 'sam_adpt':
@@ -130,6 +131,16 @@ def train_sam(args, net: nn.Module, optimizer, train_loader,
value.requires_grad = False
else:
value.requires_grad = True
elif args.mod == 'sam_lora' or args.mod == 'sam_adalora':
from models.common import loralib as lora
lora.mark_only_lora_as_trainable(net.image_encoder)
if args.mod == 'sam_adalora':
# Initialize the RankAllocator
rankallocator = lora.RankAllocator(
net.image_encoder, lora_r=4, target_rank=8,
init_warmup=500, final_warmup=1500, mask_interval=10,
total_step=3000, beta1=0.85, beta2=0.85,
)
else:
for n, value in net.image_encoder.named_parameters():
value.requires_grad = True
@@ -180,10 +191,16 @@ def train_sam(args, net: nn.Module, optimizer, train_loader,

pbar.set_postfix(**{'loss (batch)': loss.item()})
epoch_loss += loss.item()
loss.backward()

# nn.utils.clip_grad_value_(net.parameters(), 0.1)
optimizer.step()
if args.mod == 'sam_adalora':
(loss+lora.compute_orth_regu(net, regu_weight=0.1)).backward()
optimizer.step()
rankallocator.update_and_mask(net, ind)
else:
loss.backward()
optimizer.step()

optimizer.zero_grad()

'''vis images'''
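The training-step change in function.py above makes only the injected low-rank parameters trainable in `sam_lora`/`sam_adalora` mode, and AdaLoRA additionally adds an orthogonality penalty to the loss before updating its rank allocator. A minimal NumPy sketch of the LoRA layer idea itself (illustrative only — the class name and initialization below are assumptions, not the repository's `loralib` API):

```python
import numpy as np

class LoraLinear:
    """Sketch of a LoRA-style linear layer: y = x @ (W + scale * B @ A).T.
    The pretrained weight W is frozen; only the low-rank factors A and B train."""
    def __init__(self, in_features, out_features, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(out_features, in_features))  # frozen pretrained weight
        self.A = rng.normal(size=(r, in_features)) * 0.01      # small random init
        self.B = np.zeros((out_features, r))                   # zero init: dW = 0 at start
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus low-rank update; since B starts at zero, the layer
        # reproduces the pretrained output exactly at initialization.
        return x @ self.W.T + (x @ self.A.T) @ self.B.T * self.scale

    def trainable_params(self):
        return self.A.size + self.B.size

layer = LoraLinear(in_features=256, out_features=256, r=4)
x = np.ones((2, 256))
y = layer.forward(x)
print(y.shape)                   # (2, 256)
print(layer.trainable_params())  # 2048 trainable values vs 65536 for full fine-tuning
```

This is where the parameter savings quoted in the notebook come from: the frozen 256×256 weight contributes nothing to the optimizer state, while A and B together hold only 2·r·256 values.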
108 changes: 108 additions & 0 deletions guidance/lora.ipynb
@@ -0,0 +1,108 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Use LoRA for Adaptation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### A Quick Overview\n",
"\n",
"#### LoRA (ICLR 2022)\n",
"<img width=\"380\" height=\"380\" src=\"../figs/lora/lora.png\">\n",
"\n",
"Low-Rank Adaptation, or [LoRA](https://github.com/microsoft/LoRA), freezes the pretrained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times.\n",
"\n",
"#### AdaLoRA (ICLR 2023)\n",
"[AdaLoRA](https://github.com/QingruZhang/AdaLoRA) adaptively allocates the parameter budget among weight matrices according to their importance score. In particular, AdaLoRA parameterizes the incremental updates in the form of singular value decomposition. Such a novel approach allows for the effective pruning of the singular values of unimportant updates, which is essential to reduce the parameter budget but circumvent intensive exact SVD computations.\n",
"\n",
"### Application in our framework\n",
"For each AttentionBlock in the ImageEncoder, we replace the two linear layers in Attention and in the MLP with a LoRA or AdaLoRA linear layer; see an example [here](../models/ImageEncoder/vit/lora_block.py)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training\n",
"SAM, EfficientSAM, and MobileSAM can all be fine-tuned with LoRA. Use the `-mod` option to select the fine-tuning method:\n",
"``python train.py -net mobile_sam -dataset REFUGE -data_path data/REFUGE -sam_ckpt checkpoint/mobile_sam/mobile_sam.pt -image_size 256 -vis 100 -exp_name tiny-mobile-isic-256 -encoder tiny_vit -mod sam_lora``\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Performance vs. Adapter\n",
"#### REFUGE\n",
"| Baseline | Backbone | Mode | DICE | mIoU | Memory |\n",
"| ------------ | --------- | ------- | ------ | ------ | ------- |\n",
"| EfficientSAM | ViT-Small | Adapter | 0.8691 | 0.7915 | 21275 M |\n",
"| EfficientSAM | ViT-Small | LoRA | 0.8573 | 0.7703 | 22777 M |\n",
"| EfficientSAM | ViT-Small | AdaLoRA | 0.8558 | 0.7596 | 22779 M |\n",
"| MobileSAM | TinyViT | Adapter | 0.9330 | 0.8812 | 10255 M |\n",
"| MobileSAM | TinyViT | LoRA | 0.9107 | 0.8436 | 10401 M |\n",
"| MobileSAM | TinyViT | AdaLoRA | 0.8863 | 0.8031 | 10401 M |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loss and performance curves\n",
"**MobileSAM (TinyViT) on the REFUGE dataset**\n",
"\n",
"#### Adapter\n",
"<p float=\"left\">\n",
" <img src=\"../figs/lora/MobileSAM-Ti (Adapter)_loss.png\" width=\"400\" />\n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
" <img src=\"../figs/lora/MobileSAM-Ti (Adapter)_performance.png\" width=\"400\" /> \n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
"</p>\n",
"\n",
"#### LoRa\n",
"<p float=\"left\">\n",
" <img src=\"../figs/lora/MobileSAM-Ti (Lora)_loss.png\" width=\"400\" />\n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
" <img src=\"../figs/lora/MobileSAM-Ti (Lora)_performance.png\" width=\"400\" /> \n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
"</p>\n",
"\n",
"#### AdaLoRa\n",
"<p float=\"left\">\n",
" <img src=\"../figs/lora/MobileSAM-Ti (AdaLora)_loss.png\" width=\"400\" />\n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
" <img src=\"../figs/lora/MobileSAM-Ti (AdaLora)_performance.png\" width=\"400\" /> \n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
"</p>\n",
"\n",
"Training with the Adapter is visibly more stable than with LoRA or AdaLoRA."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.7.16 ('general')",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.7.16"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "e7f99538a81e8449c1b1a4a7141984025c678b5d9c33981aa2a3c129d8e1c90d"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
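The notebook's description of AdaLoRA — incremental updates kept in SVD form so that unimportant singular values can be pruned without recomputing an exact SVD — can be sketched as follows (a toy NumPy illustration; the names and masking rule here are assumptions, not the AdaLoRA library's actual API):

```python
import numpy as np

# The update is parameterized as dW = P @ diag(lam) @ Q, where lam plays the
# role of learnable "singular values". The rank budget is enforced by masking
# the smallest-magnitude entries of lam rather than by an exact SVD.
rng = np.random.default_rng(0)
d, r_init, target_rank = 64, 8, 4

P = rng.normal(size=(d, r_init))
Q = rng.normal(size=(r_init, d))
lam = rng.uniform(0.1, 1.0, size=r_init)

def masked_update(P, lam, Q, budget):
    """Zero out all but the `budget` largest-magnitude entries of lam."""
    keep = np.argsort(np.abs(lam))[-budget:]
    mask = np.zeros_like(lam)
    mask[keep] = 1.0
    return P @ np.diag(lam * mask) @ Q

dW = masked_update(P, lam, Q, target_rank)
print(np.linalg.matrix_rank(dW))  # 4: effective rank reduced to the budget
```

In the real training loop this masking is what `rankallocator.update_and_mask(net, ind)` performs per step, with the importance of each triplet estimated during training rather than read directly off `lam`.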
8 changes: 4 additions & 4 deletions guidance/mobile_sam.ipynb
@@ -90,17 +90,17 @@
"\n",
"### ISIC\n",
"<p float=\"left\">\n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (ISIC)_loss.png\" width=\"200\" />\n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (ISIC)_loss.png\" width=\"400\" />\n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (ISIC)_performance.png\" width=\"200\" /> \n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (ISIC)_performance.png\" width=\"400\" /> \n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
"</p>\n",
"\n",
"### REFUGE\n",
"<p float=\"left\">\n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (REFUGE)_loss.png\" width=\"200\" />\n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (REFUGE)_loss.png\" width=\"400\" />\n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (REFUGE)_performance.png\" width=\"200\" /> \n",
" <img src=\"../figs/MobileSAM/MobileSAM-Ti (REFUGE)_performance.png\" width=\"400\" /> \n",
" &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;\n",
"</p>"
]
2 changes: 1 addition & 1 deletion models/ImageEncoder/__init__.py
@@ -1,2 +1,2 @@
from .tinyvit.tiny_vit import TinyViT
from .vit import AdapterBlock, Block
from .vit import AdaloraBlock, AdapterBlock, Block, LoraBlock
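The widened export list suggests the encoder builder now picks a transformer block class based on the `-mod` flag. A hypothetical dispatch sketch (the mapping and stub classes below are assumptions for illustration; the actual wiring lives in the encoder builders):

```python
# Stub block classes standing in for the exported ViT block variants.
class Block: ...
class AdapterBlock(Block): ...
class LoraBlock(Block): ...
class AdaloraBlock(LoraBlock): ...

# Hypothetical mapping from the -mod training flag to a block class.
BLOCK_FOR_MOD = {
    "sam_adpt": AdapterBlock,
    "sam_lora": LoraBlock,
    "sam_adalora": AdaloraBlock,
}

def block_cls(mod: str):
    # Fall back to the plain ViT block for unrecognized modes.
    return BLOCK_FOR_MOD.get(mod, Block)

print(block_cls("sam_lora").__name__)  # LoraBlock
```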
Binary file modified models/ImageEncoder/__pycache__/__init__.cpython-37.pyc
Binary file modified models/ImageEncoder/tinyvit/__pycache__/tiny_vit.cpython-37.pyc