run finetuning.py error: TypeError: Invalid function argument. Expected parameter tensor of type torch.Tensor, but got <class 'float'> instead. #520

Closed

winca opened this issue May 17, 2024 · 8 comments

winca commented May 17, 2024

System Info

pip list | grep -i -E 'cuda|torch'
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
torch 2.3.0

GPU info: 8 x H100 SMX 80G

Information

  • The official example scripts
  • My own modified scripts

🐛 Describe the bug

cmdline:
torchrun --nnodes 1 --nproc_per_node 8 --rdzv-id=111223 --rdzv-backend=c10d --rdzv-endpoint=10.0.1.3:12341 recipes/finetuning/finetuning.py --enable_fsdp --dataset alpaca_dataset --model_name Llama3/Meta-Llama-3-8B-Instruct-hg --use_peft --peft_method lora --output_dir PEFT_model

Running this command fails with the errors below.

Error logs

Partial error output:
[rank6]: Traceback (most recent call last):
[rank6]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/recipes/finetuning/finetuning.py", line 8, in <module>
[rank6]: fire.Fire(main)
[rank6]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
[rank6]: component_trace = _Fire(component, args, parsed_flag_args, context, name)
[rank6]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
[rank6]: component, remaining_args = _CallAndUpdateTrace(
[rank6]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
[rank6]: component = fn(*varargs, **kwargs)
[rank6]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/finetuning.py", line 268, in main
[rank6]: results = train(
[rank6]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/utils/train_utils.py", line 224, in train
[rank6]: eval_ppl, eval_epoch_loss, temp_val_loss, temp_step_perplexity = evaluation(model, train_config, eval_dataloader, local_rank, tokenizer, wandb_run)
[rank6]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/utils/train_utils.py", line 372, in evaluation
[rank6]: dist.all_reduce(eval_loss, op=dist.ReduceOp.SUM)
[rank6]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[rank6]: return func(*args, **kwargs)
[rank6]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2195, in all_reduce
[rank6]: _check_single_tensor(tensor, "tensor")
[rank6]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 863, in _check_single_tensor
[rank6]: raise TypeError(
[rank6]: TypeError: Invalid function argument. Expected parameter tensor of type torch.Tensor
[rank6]: but got <class 'float'> instead.
[rank1]: Traceback (most recent call last):
[rank1]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/recipes/finetuning/finetuning.py", line 8, in <module>
[rank1]: fire.Fire(main)
[rank1]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
[rank1]: component_trace = _Fire(component, args, parsed_flag_args, context, name)
[rank1]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
[rank1]: component, remaining_args = _CallAndUpdateTrace(
[rank1]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
[rank1]: component = fn(*varargs, **kwargs)
[rank1]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/finetuning.py", line 268, in main
[rank1]: results = train(
[rank1]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/utils/train_utils.py", line 224, in train
[rank1]: eval_ppl, eval_epoch_loss, temp_val_loss, temp_step_perplexity = evaluation(model, train_config, eval_dataloader, local_rank, tokenizer, wandb_run)
[rank1]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/utils/train_utils.py", line 372, in evaluation
[rank1]: dist.all_reduce(eval_loss, op=dist.ReduceOp.SUM)
[rank1]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2195, in all_reduce
[rank1]: _check_single_tensor(tensor, "tensor")
[rank1]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 863, in _check_single_tensor
[rank1]: raise TypeError(
[rank1]: TypeError: Invalid function argument. Expected parameter tensor of type torch.Tensor
[rank1]: but got <class 'float'> instead.

Expected behavior

The run should finish normally.

@wukaixingxp (Contributor)

Hi! I cannot reproduce the error. On our H100 machine, I can run torchrun --rdzv-endpoint=localhost:0 --rdzv-id=111223 --nnodes 1 --nproc_per_node 8 --rdzv-backend=c10d recipes/finetuning/finetuning.py --enable_fsdp --dataset alpaca_dataset --model_name meta-llama/Meta-Llama-3-8B --use_peft --peft_method lora --output_dir PEFT_model without any error. I noticed that this error comes from all_reduce; can you try --rdzv-endpoint=localhost:0 to see if the error is still there? My env:
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
torch 2.3.0
torchaudio 2.2.2
torchdata 0.7.1
torchtext 0.16.2
torchvision 0.17.2


winca commented May 20, 2024

Thank you for your response, but the same error still occurs after running.

@wukaixingxp (Contributor)

Hi! I noticed that your model is also different. Can you try this command: torchrun --rdzv-endpoint=localhost:0 --rdzv-id=111223 --nnodes 1 --nproc_per_node 8 --rdzv-backend=c10d recipes/finetuning/finetuning.py --enable_fsdp --dataset alpaca_dataset --model_name meta-llama/Meta-Llama-3-8B --use_peft --peft_method lora --output_dir PEFT_model and see if the error is still there? I also wonder what the output of python -m torch.utils.collect_env is on your system.


winca commented May 21, 2024

The original "meta-llama/Meta-Llama-3-8B-Instruct" and "meta-llama/Meta-Llama-3-8B" cannot be used; they fail with the message "Meta-Llama-3-8B does not appear to have a file named config.json".
"Llama3/Meta-Llama-3-8B-Instruct-hg" is "meta-llama/Meta-Llama-3-8B-Instruct" converted to Hugging Face format. I also tested "Llama3/Meta-Llama-3-8B-hg", converted to Hugging Face format from "meta-llama/Meta-Llama-3-8B", and got exactly the same error.

python -m torch.utils.collect_env output:

/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/runpy.py:126: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:24:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3

Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.3.0
[pip3] triton==2.3.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi

@wukaixingxp (Contributor)

I encountered the "8B does not appear to have a file named config.json" error before, and I believe I solved it with a complete reinstall from the latest main (pip install -e .). Our pip package llama-recipes v0.0.1 was a little outdated; I recommend you either pull the latest main and build from source, or upgrade your llama-recipes package to 0.0.2, which was just released last Friday. Then please rerun your job with meta-llama/Meta-Llama-3-8B and see if everything works now.
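
For reference, the two upgrade paths might look like the following (a sketch assuming a source checkout of the repo; llama-recipes is the package name on PyPI):

```bash
# Option 1: build from the latest main (run from the repo root)
git pull origin main
pip install -e .

# Option 2: upgrade the published package to 0.0.2
pip install --upgrade llama-recipes
```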

wukaixingxp self-assigned this May 22, 2024

winca commented May 22, 2024

I tried it, but the same error occurred again:

[rank7]: Traceback (most recent call last):
[rank7]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/recipes/finetuning/finetuning.py", line 8, in <module>
[rank7]: fire.Fire(main)
[rank7]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 143, in Fire
[rank7]: component_trace = _Fire(component, args, parsed_flag_args, context, name)
[rank7]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 477, in _Fire
[rank7]: component, remaining_args = _CallAndUpdateTrace(
[rank7]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/fire/core.py", line 693, in _CallAndUpdateTrace
[rank7]: component = fn(*varargs, **kwargs)
[rank7]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/finetuning.py", line 267, in main
[rank7]: results = train(
[rank7]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/utils/train_utils.py", line 224, in train
[rank7]: eval_ppl, eval_epoch_loss, temp_val_loss, temp_step_perplexity = evaluation(model, train_config, eval_dataloader, local_rank, tokenizer, wandb_run)
[rank7]: File "/ssd/llm_chinahpc/Llama3/llama-recipes/src/llama_recipes/utils/train_utils.py", line 374, in evaluation
[rank7]: dist.all_reduce(eval_loss, op=dist.ReduceOp.SUM)
[rank7]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[rank7]: return func(*args, **kwargs)
[rank7]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2195, in all_reduce
[rank7]: _check_single_tensor(tensor, "tensor")
[rank7]: File "/ssd/llm_chinahpc/software/anaconda3_2024.02/envs/llama3-recipes/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 863, in _check_single_tensor
[rank7]: raise TypeError(
[rank7]: TypeError: Invalid function argument. Expected parameter tensor of type torch.Tensor
[rank7]: but got <class 'float'> instead.


wukaixingxp commented May 22, 2024

I noticed that the error comes from the evaluation function, and I can now reproduce it. The problem is that the eval for-loop is never entered, because len(eval_dataloader) == 0. Taking a closer look, the length of dataset_val becomes 4 after ConcatDataset(), which is too small for even one eval step on 8 GPUs. The temporary solution is to change the eval set length here to a bigger number, like 1000 instead of 200; remember to change it on both line 30 and line 32. I will talk to the team about how we can prevent this by adding some warning.
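
To make the failure mode concrete, here is a minimal sketch of the pattern (illustrative names, not the exact llama-recipes code), assuming the loss accumulator starts out as a Python float, as it does in evaluation():

```python
import torch
import torch.distributed as dist

def evaluation_sketch(model, eval_dataloader):
    eval_loss = 0.0  # accumulator starts as a Python float
    for batch in eval_dataloader:  # body never runs when len(eval_dataloader) == 0
        with torch.no_grad():
            outputs = model(**batch)
        # after the first iteration, eval_loss becomes a torch.Tensor
        eval_loss += outputs.loss.detach().float()
    # with an empty (sharded) dataloader, eval_loss is still a float here, so
    # all_reduce raises: TypeError: Invalid function argument. Expected
    # parameter tensor of type torch.Tensor, but got <class 'float'> instead.
    dist.all_reduce(eval_loss, op=dist.ReduceOp.SUM)
    return eval_loss
```

A defensive fix, on top of enlarging the eval split, would be to initialize the accumulator as a tensor (e.g. eval_loss = torch.zeros(1, device=local_rank)) or to fail fast with a clear message when the sharded eval dataloader is empty.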


winca commented May 28, 2024

Got it, thank you very much!

winca closed this as completed May 28, 2024