This repository has been archived by the owner on Aug 16, 2024. It is now read-only.

Refactoring Entry Points of m-LoRA #83

Merged 1 commit on Jul 23, 2024
11 changes: 2 additions & 9 deletions .gitignore
@@ -166,15 +166,8 @@ cython_debug/
__pycache__/
*.egg-info/
*.egg

data/*
!data/AlpacaDataCleaned/
template/*
!data/data_demo.json
!data/dummy_data.json
!template/test_data_demo.json
!template/template_demo.json
data_train.json
mlora.json
mlora_train_*.json

# macOS junk files
.DS_Store
1 change: 0 additions & 1 deletion Dockerfile
@@ -45,7 +45,6 @@ RUN . ~/.bashrc \
&& cd /mLoRA \
&& pyenv virtualenv $PYTHON_VERSION mlora \
&& pyenv local mlora \
&& pip install torch==2.3.1 \
&& pip install -r ./requirements.txt

WORKDIR /mLoRA
16 changes: 12 additions & 4 deletions README.md
@@ -121,13 +121,15 @@ You can conveniently utilize m-LoRA via `launch.py`. The following example demonstrates

```bash
# Generating configuration
python launch.py gen --template lora --tasks ./data/dummy_data.json
python launch.py gen --template lora --tasks ./tests/dummy_data.json

# Running the training task
python launch.py run --base_model TinyLlama/TinyLlama_v1.1

# Try with gradio web ui
python inference.py \
--base_model TinyLlama/TinyLlama_v1.1 \
--template ./template/alpaca.json \
--template alpaca \
--lora_weights ./casual_0
```

@@ -140,15 +142,21 @@ python launch.py help
## m-LoRA

The `mlora.py` code is a starting point for finetuning on various datasets.

Basic command for finetuning a baseline model on the [Alpaca Cleaned](https://github.com/gururise/AlpacaDataCleaned) dataset:
```bash
# Generating configuration
python launch.py gen \
--template lora \
--tasks yahma/alpaca-cleaned

python mlora.py \
--base_model meta-llama/Llama-2-7b-hf \
--config ./config/alpaca.json \
--config mlora.json \
--bf16
```

You can check the template finetune configuration in [template](./template/) folder.
You can check the template finetune configuration in [templates](./templates/) folder.

For further detailed usage information, please use `--help` option:
```bash
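The README changes above point the user at `launch.py` subcommands (`gen`, `run`, `help`) instead of per-task config files. As a rough sketch of what such a subcommand-style entry point looks like, here is a minimal argparse dispatcher; all names besides the commands and flags shown in the diff (`gen`, `run`, `--template`, `--tasks`, `--base_model`) are illustrative, not the actual m-LoRA implementation:

```python
# Minimal sketch of a subcommand-style CLI entry point, loosely modeled
# on the refactored launch.py (gen / run).  Hypothetical, not the real code.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="launch.py")
    sub = parser.add_subparsers(dest="command", required=True)

    # "gen" writes a training configuration (e.g. mlora.json) from a template.
    gen = sub.add_parser("gen", help="generate a training configuration")
    gen.add_argument("--template", required=True)
    gen.add_argument("--tasks", required=True)

    # "run" launches the training task against a base model.
    run = sub.add_parser("run", help="run the training task")
    run.add_argument("--base_model", required=True)

    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"dispatching subcommand: {args.command}")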
59 changes: 0 additions & 59 deletions config/alpaca.json

This file was deleted.

89 changes: 0 additions & 89 deletions config/alpaca_mixlora.json

This file was deleted.

59 changes: 0 additions & 59 deletions config/dummy.json

This file was deleted.

57 changes: 0 additions & 57 deletions config/dummy_glm.json

This file was deleted.
