Fix lora tutorial import issue #10626

Closed
92 changes: 50 additions & 42 deletions tutorials/nlp/lora.ipynb
@@ -2,32 +2,36 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Currently, this notebook must be run in a NeMo container.\n",
"An example command to launch the container:\n",
"```bash\n",
"docker run --gpus all -it --rm -v <your_nemo_dir>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 <your_nemo_container>\n",
"```"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Update megatron version to the newest.\n",
"!cd /workspace && python -m pip install -e git+https://github.com/NVIDIA/Megatron-LM#egg=megatron-core"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b6d57a70",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"%cd /NeMo/tutorials/nlp\n",
@@ -36,10 +40,7 @@
"import wget\n",
"import sys\n",
"sys.path.insert(0, \"../..\") # find the local nemo first before the installed nemo"
],
"metadata": {
"collapsed": false
}
]
},
{
"attachments": {},
@@ -325,13 +326,14 @@
{
"cell_type": "code",
"execution_count": null,
"id": "641969e7",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"!wget -nc --content-disposition {megatron_gpt_345m_nemo_url} -O {NEMO_DIR}/{gpt_file_name}"
],
"metadata": {
"collapsed": false
}
]
},
{
"attachments": {},
@@ -537,17 +539,18 @@
},
{
"cell_type": "markdown",
"id": "618a0a5a",
"metadata": {
"collapsed": false
},
"source": [
"Simply substitute with the `MegatronT5SFTModel` class to use T5 instead of GPT.\n",
"\n",
"To use a different PEFT method, you can use a different config class in place of `LoraPEFTConfig`, such as `CanonicalAdaptersPEFTConfig`, `IA3PEFTConfig`, `PtuningPEFTConfig`. You can also use a combination of the methods by passing in a list:\n",
"`model.add_adapter([LoraPEFTConfig(model_cfg), PtuningPEFTConfig(model_cfg)])`\n",
"\n",
"We're now ready to start training."
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
@@ -597,6 +600,10 @@
{
"cell_type": "code",
"execution_count": null,
"id": "fe1c31b6",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# reimport libraries and classes in case one wants to only run cells from the Inference section\n",
@@ -612,31 +619,30 @@
"DATA_DIR = \"data\"\n",
"CONFIG_DIR = os.path.join(NEMO_DIR, \"conf\")\n",
"SQUAD_DIR = os.path.join(DATA_DIR, \"SQuAD\")\n"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "markdown",
"source": [
"First, we will load and modify a config file that will be used for inference.\n"
],
"id": "6021ec86",
"metadata": {
"collapsed": false
}
},
"source": [
"First, we will load and modify a config file that will be used for inference.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11908c87",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Download the example config file\n",
"wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/examples/nlp/language_modeling/tuning/conf/megatron_gpt_generate_config.yaml', CONFIG_DIR)"
],
"metadata": {
"collapsed": false
}
]
},
{
"cell_type": "code",
@@ -711,30 +717,32 @@
},
{
"cell_type": "markdown",
"source": [
"The cell below is required if you are running the notebook end-to-end, and if you use a different batch size for training and evaluation. In this case, the microbatch calculator needs to be rest. If you are running training only or inference only, feel free to ignore this cell."
],
"id": "f02e403e",
"metadata": {
"collapsed": false
}
},
"source": [
"The cell below is required if you are running the notebook end-to-end, and if you use a different batch size for training and evaluation. In this case, the microbatch calculator needs to be rest. If you are running training only or inference only, feel free to ignore this cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96e1a150",
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from nemo.utils.apex_utils import _reconfigure_microbatch_calculator\n",
"_reconfigure_microbatch_calculator(\n",
"from megatron.core.num_microbatches_calculator import reconfigure_num_microbatches_calculator\n",
"reconfigure_num_microbatches_calculator(\n",
" rank=0,\n",
" rampup_batch_size=None,\n",
" global_batch_size=config_eval.model.global_batch_size,\n",
" micro_batch_size=config_eval.model.micro_batch_size,\n",
" data_parallel_size=1,\n",
")"
],
"metadata": {
"collapsed": false
}
]
},
{
"attachments": {},
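The substantive fix in this diff replaces NeMo's apex-based `_reconfigure_microbatch_calculator` with Megatron-LM's `reconfigure_num_microbatches_calculator`. For readers unfamiliar with what that calculator tracks, the sketch below is a simplified, self-contained stand-in showing the arithmetic involved; the function name and error handling here are illustrative only, not Megatron's actual implementation.

```python
# Illustration only: a simplified stand-in for the num-microbatches
# arithmetic that megatron.core's calculator maintains. It shows why
# the calculator must be reconfigured when training and evaluation
# use different global/micro batch sizes.

def num_microbatches(global_batch_size: int,
                     micro_batch_size: int,
                     data_parallel_size: int) -> int:
    """Microbatches each rank accumulates per global step."""
    per_step = micro_batch_size * data_parallel_size
    if global_batch_size % per_step != 0:
        raise ValueError(
            f"global_batch_size={global_batch_size} must be divisible "
            f"by micro_batch_size * data_parallel_size = {per_step}"
        )
    return global_batch_size // per_step

# E.g. training with global=8, micro=2 accumulates 4 microbatches,
# while evaluating with global=4, micro=4 accumulates only 1 --
# hence the reconfiguration step between the two phases.
print(num_microbatches(8, 2, 1))  # 4
print(num_microbatches(4, 4, 1))  # 1
```

This mirrors the keyword arguments passed in the notebook cell (`global_batch_size`, `micro_batch_size`, `data_parallel_size`); the real call also accepts `rank` and `rampup_batch_size`.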