============================= test session starts ==============================
platform linux -- Python 3.10.8, pytest-8.2.0, pluggy-1.5.0
rootdir: /workspace//peft
configfile: pyproject.toml
plugins: cov-5.0.0, anyio-4.3.0
collected 5852 items

tests/regression/test_regression.py sssssssssssssssssss [ 0%]
tests/test_adaption_prompt.py .....x................ [ 0%]
tests/test_auto.py ........ [ 0%]
tests/test_common_gpu.py sssssssssssssssssssssssssssssss [ 1%]
tests/test_config.py ................................................... [ 2%]
........................................................................ [ 3%]
............................... [ 3%]
tests/test_custom_models.py ............................................ [ 4%]
........................................................................ [ 5%]
........................................................................ [ 7%]
.........ssssssssssssssssssssssssssssssssssssssssssssssssssssss......... [ 8%]
...sssss................................................................ [ 9%]
........................................................................ [ 10%]
........................................................................ [ 12%]
................................................ssssssssssssssssssssssss [ 13%]
ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss.... [ 14%]
........................................................................ [ 15%]
........................................................................ [ 17%]
........................................................................ [ 18%]
........................................................................ [ 19%]
........................................................................ [ 20%]
........................................................................ [ 21%]
........................................................................ [ 23%]
........................................................................ [ 24%]
........................................................................ [ 25%]
.................................ssssssssssss........................... [ 26%]
.....................................ssssssssssssssssssssssssssss....... [ 28%]
........................................................................ [ 29%]
........................................................................ [ 30%]
........................................................................ [ 31%]
.....................................................................sss [ 33%]
ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss.... [ 34%]
........................................................................ [ 35%]
........................................................................ [ 36%]
........................................................................ [ 37%]
........................................................................ [ 39%]
........................................................................ [ 40%]
........................................................................ [ 41%]
.....sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 42%]
ssssssssssssssssssss.................................................... [ 44%]
........................................................................ [ 45%]
.....................s [ 45%]
tests/test_decoder_models.py ........................................... [ 46%]
....................s.sss..ss.sss..ss.sss.ss.sss..ss.sss..ss.sss..ss.sss [ 47%]
..ss.sss..s............................................................. [ 48%]
....ssss....ssss....ssss...ssss....ssss....ssss....ssss....ssss....ssss. [ 50%]
s..ssss.s..sssss..ssss.s..ssss.s..ssss.s..ssss.s..ssss.s................ [ 51%]
........................................................................ [ 52%]
........................................................................ [ 53%]
.................................sssss...sssss...sssss...sssss...sssss.. [ 55%]
.sssss...sssss...sssss.................................................. [ 56%]
........................................................................ [ 57%]
.......sss.s...sss.s.s.ssssss..sss.s...sss.s...sss.s...sss.s...sss.s...s [ 58%]
ss.ss..sss.sss.ssssss..sss.ss..sss.ss..sss.ss..sss.ss..sss.ss........... [ 59%]
.....s..............................................................s... [ 61%]
.s.s........................................s.sssssss.sssssss.sssssss.ss [ 62%]
sssss.sssssss.sssssss.sssssss.ssssss.................................... [ 63%]
...........................s.sssssss.sssssss.sssssss.sssssss.sssssss.sss [ 64%]
ssss.sssssss.ssssssss.sssssss.sssssss.sssssss.sssssss.sssssss.sssssss.ss [ 66%]
sssss.sssss............................................................. [ 67%]
...ssss.sssssss.sssssss.sssssss.sssssss.sssssss.sssssss.sssssss.sss..... [ 68%]
........................................................................ [ 69%]
.......................................................s.......s.......s [ 71%]
......s.......s.......s.......s.......s.......s.......s.......s......s.. [ 72%]
.....s.......s.......s.......s....sss.....sss.....sss....sss.....sss.... [ 73%]
.sss.....sss.....sss.....sss.....sss.....sss....sss.....sss.....sss..... [ 74%]
sss.....sss...s.sssssss.sssssss.sssssss.sssssss.sssssss.sssssss.sssssss. [ 75%]
ssssssss...sssss...sssss...sssss...sssss...sssss...sssss...sssss...sss.. [ 77%]
............................................................s.sssssss.ss [ 78%]
sssss.sssssss.sssssss.sssssss.sssssss.sssssss.ssssss [ 79%]
tests/test_encoder_decoder_models.py ................s.sss..ss.sss..s... [ 79%]
...............ssss....ssss....ssss.s..ssss.s...ss......ss.............. [ 81%]
........................sssss...sssss..................................s [ 82%]
ss.s...sss.s.s.sssssss.sssssss.sssssss.ssssssss.sssssss.sssss........... [ 83%]
..........................................s.......s.......s.......s....s [ 84%]
ss.....sss.....sss.....sss...s.sssssss.ssssssss...sssss...sss........... [ 86%]
.....s.sssssss.ssssss. [ 86%]
tests/test_feature_extraction_models.py ................................ [ 87%]
..................................ssss....ssss....ssss....ssss....ssss.s [ 88%]
..ssss.s..ssss.s..ssss.s................................................ [ 89%]
..s.......s.......sss.s...sss.s...sss.s...sss.s.........s.sssssss.ssssss [ 90%]
s.sssssss.ssssss........................................................ [ 91%]
.............s.......s.......s.......s....sss.....ssss....sss.....sss... [ 93%]
..sss.....ssss..sssssss.sssssss.sssssss.sssssss.ssssssss...sssss...sssss [ 94%]
..sssss..sss................................s.sssssss.sssssss.sssssss.ss [ 95%]
ssss [ 95%]
tests/test_gpu_examples.py sssssssssssssssssssssssssssssssssssssssssssss [ 96%]
sssssssssssssssss [ 96%]
tests/test_hub_features.py . [ 96%]
tests/test_initialization.py .................. [ 97%]
tests/test_lora_megatron.py ssss [ 97%]
tests/test_low_level_api.py ... [ 97%]
tests/test_mixed.py .............................................. [ 97%]
tests/test_multitask_prompt_tuning.py ........... [ 98%]
tests/test_other.py .... [ 98%]
tests/test_poly.py . [ 98%]
tests/test_stablediffusion.py ................ [ 98%]
tests/test_tuners_utils.py ...........ssssssssss........................ [ 99%]
................................ [ 99%]
tests/test_vera.py .........
[100%]

=============================== warnings summary ===============================
../../../../../../torch/venv3/pytorch/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28
  /torch/venv3/pytorch/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
    from pkg_resources import packaging # type: ignore[attr-defined]

../../../../../Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py:73
  /workspace/Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py:73: DeprecationWarning: invalid escape sequence '\s'
    """

tests/test_adaption_prompt.py::AdaptionPromptTester::test_add_and_set_while_disabled
  /workspace//peft/tests/test_adaption_prompt.py:642: UserWarning:  MLU operators don't support 64-bit calculation. so the 64 bit data will be forcibly converted to 32-bit for calculation.  (Triggered internally at /torch/catch/torch_mlu/csrc/aten/utils/tensor_util.cpp:157.)
    input_ids = torch.LongTensor([[1, 1, 1], [2, 1, 2]]).to(self.torch_device)

tests/test_adaption_prompt.py: 1 warning
tests/test_decoder_models.py: 183 warnings
tests/test_multitask_prompt_tuning.py: 1 warning
tests/test_tuners_utils.py: 5 warnings
  /workspace//transformers/src/transformers/generation/configuration_utils.py:464: UserWarning: `pad_token_id` should be positive but got -1. This will cause errors when batch generating, if there is padding. Please set `pas_token_id` explicitly by `model.generation_config.pad_token_id=PAD_TOKEN_ID` to avoid errors in generation, and ensure your `input_ids` input does not have negative values.
    warnings.warn(

tests/test_adaption_prompt.py: 3 warnings
tests/test_decoder_models.py: 223 warnings
tests/test_encoder_decoder_models.py: 36 warnings
tests/test_mixed.py: 1 warning
tests/test_multitask_prompt_tuning.py: 6 warnings
tests/test_poly.py: 1 warning
  /workspace//transformers/src/transformers/generation/utils.py:1156: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
    warnings.warn(

tests/test_adaption_prompt.py::AdaptionPromptTester::test_bf16_inference
  /workspace//transformers/src/transformers/generation/stopping_criteria.py:495: UserWarning: The operator 'aten::isin.Tensor_Tensor_out' is not currently supported on the MLU backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /torch/catch/torch_mlu/csrc/aten/MLUFallback.cpp:49.)
    is_done = torch.isin(input_ids[:, -1], self.eos_token_id.to(input_ids.device))

tests/test_adaption_prompt.py: 8 warnings
tests/test_multitask_prompt_tuning.py: 6 warnings
  /workspace//peft/src/peft/utils/save_and_load.py:192: UserWarning: Could not find a config file in - will assume that the vocabulary was not modified.
    warnings.warn(

tests/test_adaption_prompt.py::AdaptionPromptTester::test_save_pretrained_regression
  /torch/venv3/pytorch/lib/python3.10/site-packages/torch/serialization.py:835: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
    if isinstance(obj, torch.storage.TypedStorage) and obj.device.type == 'mlu' \

tests/test_auto.py: 7 warnings
tests/test_decoder_models.py: 551 warnings
tests/test_encoder_decoder_models.py: 143 warnings
tests/test_feature_extraction_models.py: 154 warnings
tests/test_mixed.py: 2 warnings
tests/test_poly.py: 1 warning
tests/test_stablediffusion.py: 21 warnings
tests/test_tuners_utils.py: 1 warning
  /torch/venv3/pytorch/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
    warnings.warn(

tests/test_custom_models.py: 47 warnings
tests/test_decoder_models.py: 31 warnings
tests/test_tuners_utils.py: 1 warning
  /workspace//peft/src/peft/tuners/lora/layer.py:1069: UserWarning: fan_in_fan_out is set to False but the target module is `Conv1D`. Setting fan_in_fan_out to True.
    warnings.warn(

tests/test_custom_models.py: 59 warnings
tests/test_decoder_models.py: 20 warnings
  /workspace//peft/src/peft/tuners/ia3/model.py:131: UserWarning: fan_in_fan_out is set to False but the target module is `Conv1D`. Setting fan_in_fan_out to True.
    warnings.warn(

tests/test_custom_models.py: 20 warnings
  /workspace//peft/src/peft/tuners/ia3/model.py:123: UserWarning: fan_in_fan_out is set to True but the target module is `torch.nn.Linear`. Setting fan_in_fan_out to False.
    warnings.warn(

tests/test_custom_models.py::PeftCustomModelTester::test_active_adapter_70_Vanilla_MLP_1_BOFT
  /workspace//peft/src/peft/tuners/boft/layer.py:59: UserWarning: Failed to load the CUDA extension: CUDA_HOME environment variable is not set. Please set it to your CUDA install root., check if ninja is available.
    warnings.warn(f"Failed to load the CUDA extension: {e}, check if ninja is available.")

tests/test_custom_models.py: 231 warnings
tests/test_decoder_models.py: 197 warnings
tests/test_encoder_decoder_models.py: 54 warnings
tests/test_feature_extraction_models.py: 64 warnings
tests/test_stablediffusion.py: 3 warnings
  /workspace//peft/src/peft/tuners/boft/layer.py:60: UserWarning: Setting boft_n_butterfly_factor to 1 to speed up the finetuning process.
    warnings.warn("Setting boft_n_butterfly_factor to 1 to speed up the finetuning process.")

tests/test_custom_models.py: 230 warnings
tests/test_decoder_models.py: 197 warnings
tests/test_encoder_decoder_models.py: 54 warnings
tests/test_feature_extraction_models.py: 64 warnings
tests/test_stablediffusion.py: 3 warnings
  /workspace//peft/src/peft/tuners/boft/layer.py:59: UserWarning: Failed to load the CUDA extension: /workspace/volume/zhangxiao1/.cache/torch_extensions/py310_cpu/fbd_cuda/fbd_cuda.so: cannot open shared object file: No such file or directory, check if ninja is available.
    warnings.warn(f"Failed to load the CUDA extension: {e}, check if ninja is available.")

tests/test_custom_models.py: 17 warnings
tests/test_decoder_models.py: 18 warnings
  /workspace//peft/src/peft/tuners/vera/model.py:310: UserWarning: fan_in_fan_out is set to False but the target module is `Conv1D`. Setting fan_in_fan_out to True.
    warnings.warn(

tests/test_custom_models.py::PeftCustomModelTester::test_adapter_name_makes_no_difference_6
  /workspace//peft/src/peft/tuners/boft/layer.py:415: UserWarning: The operator 'aten::_linalg_solve_ex.result' is not currently supported on the MLU backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /torch/catch/torch_mlu/csrc/aten/MLUFallback.cpp:49.)
    Q = torch.linalg.solve(id_mat + skew_mat, id_mat - skew_mat, left=False)

tests/test_custom_models.py: 87 warnings
tests/test_decoder_models.py: 31 warnings
tests/test_encoder_decoder_models.py: 8 warnings
tests/test_feature_extraction_models.py: 16 warnings
  /workspace//peft/src/peft/tuners/tuners_utils.py:598: UserWarning: Adapter delete_me was active which is now deleted. Setting active adapter to default.
    warnings.warn(

tests/test_custom_models.py::PeftCustomModelTester::test_disable_adapters_70_Vanilla_MLP_1_BOFT
  /torch/venv3/pytorch/lib/python3.10/site-packages/torch/autograd/__init__.py:251: UserWarning: The operator 'aten::linalg_lu_solve.out' is not currently supported on the MLU backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /torch/catch/torch_mlu/csrc/aten/MLUFallback.cpp:49.)
    Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass

tests/test_custom_models.py: 32 warnings
tests/test_decoder_models.py: 7 warnings
tests/test_encoder_decoder_models.py: 2 warnings
tests/test_feature_extraction_models.py: 4 warnings
  /workspace//peft/src/peft/tuners/ia3/layer.py:140: UserWarning: Unmerge result can be inaccurate for (IA)^3.
    warnings.warn("Unmerge result can be inaccurate for (IA)^3.")

tests/test_custom_models.py: 10 warnings
  /workspace//peft/src/peft/tuners/ia3/layer.py:264: UserWarning: Unmerge result can be inaccurate for (IA)^3.
    warnings.warn("Unmerge result can be inaccurate for (IA)^3.")

tests/test_custom_models.py::PeftCustomModelTester::test_load_resized_embedding_ignore_mismatched_sizes
  /workspace//peft/src/peft/utils/save_and_load.py:369: UserWarning: Some weights of PeftModel were not initialized from the model checkpoint and are being ignored because you passed `ignore_mismatched_sizes=True`:
  - base_model.model.emb.lora_embedding_A.default: found shape torch.Size([8, 100]) in the checkpoint and torch.Size([8, 105]) in the model instantiated.
    warnings.warn(msg)

tests/test_decoder_models.py: 15 warnings
  /workspace//peft/src/peft/tuners/adalora/model.py:203: UserWarning: fan_in_fan_out is set to False but the target module is `Conv1D`. Setting fan_in_fan_out to True.
    warnings.warn(

tests/test_decoder_models.py: 60 warnings
tests/test_multitask_prompt_tuning.py: 7 warnings
  /workspace//peft/src/peft/peft_model.py:1407: UserWarning: Position ids are not supported for parameter efficient tuning. Ignoring position ids.
    warnings.warn("Position ids are not supported for parameter efficient tuning. Ignoring position ids.")

tests/test_initialization.py::TestInitialization::test_vera_mixing_save_projection_raises
tests/test_vera.py::TestVera::test_multiple_adapters_save_load_save_projection_false
tests/test_vera.py::TestVera::test_multiple_adapters_save_projection_false_contains_no_vera_A_vera_B
  /workspace//peft/src/peft/tuners/vera/config.py:153: UserWarning: Specified to not save vera_A and vera_B within the state dictionary, instead they will be restored using the PRNG key store in `config.projection_prng_key`. Consider setting `config.save_projection` to `True` to guarantee restoring the checkpoint correctly on all system configurations.
    warnings.warn(

tests/test_stablediffusion.py: 21 warnings
tests/test_tuners_utils.py: 1 warning
  /torch/venv3/pytorch/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py:236: FutureWarning: The configuration file of the unet has set the default `sample_size` to smaller than 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the following:
  - CompVis/stable-diffusion-v1-4
  - CompVis/stable-diffusion-v1-3
  - CompVis/stable-diffusion-v1-2
  - CompVis/stable-diffusion-v1-1
  - runwayml/stable-diffusion-v1-5
  - runwayml/stable-diffusion-inpainting
  you should change 'sample_size' to 64 in the configuration file. Please make sure to update the config accordingly as leaving `sample_size=32` in the config might lead to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `unet/config.json` file
    deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)

tests/test_stablediffusion.py::StableDiffusionModelTester::test_disable_adapter_4_test_hf_internal_testing_tiny_stable_diffusion_torch_boft
tests/test_stablediffusion.py::StableDiffusionModelTester::test_merge_layers_4_test_hf_internal_testing_tiny_stable_diffusion_torch_boft
tests/test_stablediffusion.py::StableDiffusionModelTester::test_merge_layers_safe_merge_4_test_hf_internal_testing_tiny_stable_diffusion_torch_boft
  /workspace//peft/src/peft/tuners/boft/layer.py:223: UserWarning: Unscaling operation for BOFT not supported! Keeping scale to 1.
    warnings.warn("Unscaling operation for BOFT not supported! Keeping scale to 1.")

tests/test_vera.py::TestVera::test_multiple_adapters_save_load_save_projection_false
  /workspace//peft/src/peft/utils/save_and_load.py:335: UserWarning: Specified to not load vera_A and vera_B from state dictionary. This means we will be relying on PRNG initialisation to restore these projections using `config.projection_prng_key`, which may not be accurate on all system configurations.
    warnings.warn(

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.10.8-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------
src/peft/__init__.py 8 0 100%
src/peft/auto.py 68 5 93% 52, 81, 88, 100, 108
src/peft/config.py 101 7 93% 61, 116, 146-147, 177, 202-203
src/peft/helpers.py 28 28 0% 1-113
src/peft/import_utils.py 45 17 62% 31-33, 39-44, 58-69
src/peft/mapping.py 36 4 89% 62, 112, 165, 168
src/peft/mixed_model.py 143 24 83% 66, 69-72, 132, 140, 157, 163, 183-185, 227-230, 267, 284, 325, 334, 386-389, 393, 398, 402
src/peft/peft_model.py 932 390 58% 157-160, 165-168, 203, 212, 245, 346, 349-376, 381, 384, 439, 461, 479, 535-559, 572-574, 607-609, 617, 677, 688, 737-804, 853, 870-909, 936, 975, 980-982, 986-989, 995, 1060-1061, 1089, 1105-1157, 1170-1223, 1287-1289, 1300, 1322-1323, 1325-1326, 1347-1348, 1390, 1411-1414, 1518-1522, 1525-1526, 1528-1529, 1572-1602, 1619, 1621-1624, 1626-1629, 1641-1642, 1663-1669, 1681, 1747-1748, 1771-1778, 1792-1845, 1858-1894, 1957-1958, 1981-1988, 2005-2061, 2075-2126, 2187, 2208-2209, 2211-2212
src/peft/tuners/__init__.py 15 0 100%
src/peft/tuners/adalora/__init__.py 14 7 50% 27-37
src/peft/tuners/adalora/bnb.py 76 76 0% 15-145
src/peft/tuners/adalora/config.py 18 0 100%
src/peft/tuners/adalora/gptq.py 32 27 16% 30-37, 40-68
src/peft/tuners/adalora/layer.py 220 97 56% 31, 50, 55, 77, 127, 143, 153-154, 170, 192-193, 218, 237-253, 257-274, 279, 282-284, 287-336, 340-348, 352-361
src/peft/tuners/adalora/model.py 158 78 51% 74, 100, 125, 132, 154-156, 158, 172-180, 182-190, 192, 196-200, 209, 221, 239-261, 265-293, 296-309, 332-351
src/peft/tuners/adaption_prompt/__init__.py 4 0 100%
src/peft/tuners/adaption_prompt/config.py 24 1 96% 73
src/peft/tuners/adaption_prompt/layer.py 46 2 96% 68, 82
src/peft/tuners/adaption_prompt/model.py 82 4 95% 65, 73, 100, 102
src/peft/tuners/adaption_prompt/utils.py 50 11 78% 47-50, 82, 96-101, 106
src/peft/tuners/boft/__init__.py 4 0 100%
src/peft/tuners/boft/config.py 24 2 92% 128, 130
src/peft/tuners/boft/fbd/__init__.py 0 0 100%
src/peft/tuners/boft/layer.py 465 67 86% 45, 57, 94-96, 100-102, 132, 196, 202-206, 212-216, 221, 234, 247, 253, 257, 265, 271, 275, 282, 290, 293, 319, 342, 377, 450, 489, 511-512, 541, 552, 571, 582, 593, 613-614, 644, 659, 665, 676, 682, 687-692, 700, 706, 710, 717, 722, 725, 751, 817-818, 857, 868, 889, 900, 911, 942-943
src/peft/tuners/boft/model.py 154 27 82% 103, 148-152, 173-178, 183, 189-193, 198, 213-219, 233-237, 244-245, 253, 293
src/peft/tuners/ia3/__init__.py 13 7 46% 26-36
src/peft/tuners/ia3/bnb.py 67 67 0% 15-129
src/peft/tuners/ia3/config.py 18 0 100%
src/peft/tuners/ia3/layer.py 176 8 95% 44, 50, 137-138, 242, 261-262, 292
src/peft/tuners/ia3/model.py 170 32 81% 82-84, 87, 94, 99-108, 110-118, 140, 213-217, 232-238, 277-278, 285, 289, 312, 315, 334-336, 383
src/peft/tuners/loha/__init__.py 4 0 100%
src/peft/tuners/loha/config.py 19 0 100%
src/peft/tuners/loha/layer.py 184 40 78% 117, 139, 157, 252-253, 295-296, 311-322, 337-367
src/peft/tuners/loha/model.py 20 0 100%
src/peft/tuners/lokr/__init__.py 4 0 100%
src/peft/tuners/lokr/config.py 20 0 100%
src/peft/tuners/lokr/layer.py 185 21 89% 82-88, 111, 127, 156, 188, 206, 219, 294-295, 339-340, 377-379, 399-400
src/peft/tuners/lokr/model.py 20 0 100%
src/peft/tuners/lora/__init__.py 17 10 41% 27-42
src/peft/tuners/lora/aqlm.py 46 29 37% 25, 40-44, 48-71, 74-75, 97-98
src/peft/tuners/lora/awq.py 53 34 36% 26, 41-49, 52-75, 78-79, 96-106
src/peft/tuners/lora/bnb.py 265 265 0% 14-508
src/peft/tuners/lora/config.py 46 6 87% 290-295, 299
src/peft/tuners/lora/eetq.py 53 40 25% 24-82, 98-102
src/peft/tuners/lora/gptq.py 49 31 37% 37-47, 59-82, 85-86, 111-112
src/peft/tuners/lora/layer.py 582 71 88% 68-84, 94, 113, 125, 138, 148, 156-174, 246-249, 255-259, 264, 267, 445-446, 480-481, 486-490, 551, 566, 571, 587, 627, 641-642, 668-669, 674-678, 698, 742, 787, 792, 810, 868, 895-896, 930-931, 950-954, 1016, 1060-1064
src/peft/tuners/lora/model.py 369 64 83% 181, 205, 242-246, 268-273, 283-285, 288-290, 304, 319-325, 347-351, 372-373, 407, 409, 415, 464, 501, 505, 507, 513, 519, 577, 580, 606-610, 621-622, 629, 685, 699, 703-707, 718-722, 724-725, 745-749, 768, 780
src/peft/tuners/lora/tp_layer.py 106 85 20% 50-86, 102-153, 156-188, 205, 213-226
src/peft/tuners/lycoris_utils.py 209 39 81% 83, 92-95, 99, 108, 136, 147, 150-153, 159-163, 170-171, 180, 183, 189, 224, 245-246, 261-262, 277, 291-295, 316, 336-338, 384, 404-405, 417
src/peft/tuners/mixed/__init__.py 2 0 100%
src/peft/tuners/mixed/model.py 188 35 81% 67, 74, 100, 112, 120-124, 143-153, 160, 165, 178, 200-204, 211-212, 220, 236, 265-271, 276, 286, 292
src/peft/tuners/multitask_prompt_tuning/__init__.py 3 0 100%
src/peft/tuners/multitask_prompt_tuning/config.py 20 0 100%
src/peft/tuners/multitask_prompt_tuning/model.py 47 4 91% 37, 64, 70-72
src/peft/tuners/oft/__init__.py 4 0 100%
src/peft/tuners/oft/config.py 19 0 100%
src/peft/tuners/oft/layer.py 189 10 95% 80, 97, 117, 176, 188-189, 352-353, 387-388
src/peft/tuners/oft/model.py 18 0 100%
src/peft/tuners/p_tuning/__init__.py 3 0 100%
src/peft/tuners/p_tuning/config.py 16 0 100%
src/peft/tuners/p_tuning/model.py 34 7 79% 84-96, 119, 124, 128
src/peft/tuners/poly/__init__.py 4 0 100%
src/peft/tuners/poly/config.py 17 0 100%
src/peft/tuners/poly/layer.py 93 12 87% 49, 56, 90, 109-114, 143, 170-171
src/peft/tuners/poly/model.py 109 24 78% 53, 62, 72, 75-77, 80-84, 100, 107, 120-126, 140-142, 147, 156
src/peft/tuners/poly/router.py 40 5 88% 31, 41, 45, 66, 68
src/peft/tuners/prefix_tuning/__init__.py 3 0 100%
src/peft/tuners/prefix_tuning/config.py 9 0 100%
src/peft/tuners/prefix_tuning/model.py 19 4 79% 65-66, 76-77
src/peft/tuners/prompt_tuning/__init__.py 3 0 100%
src/peft/tuners/prompt_tuning/config.py 22 0 100%
src/peft/tuners/prompt_tuning/model.py 30 0 100%
src/peft/tuners/tuners_utils.py 336 46 86% 61-62, 71-88, 93, 97-106, 159, 212, 240, 272, 282, 361, 417, 469, 481, 484, 648, 722, 762, 769-774, 776, 793-798
src/peft/tuners/vera/__init__.py 4 0 100%
src/peft/tuners/vera/buffer_dict.py 61 25 59% 76, 83, 92-94, 102, 123, 130-131, 134-147, 150-157, 160
src/peft/tuners/vera/config.py 24 0 100%
src/peft/tuners/vera/layer.py 138 14 90% 79, 82, 97, 115, 191-192, 222-225, 232-237, 254
src/peft/tuners/vera/model.py 218 36 83% 61, 129, 142-143, 175, 213, 259-263, 280-289, 296, 302-306, 316, 340-346, 360-364, 371-372, 380, 421
src/peft/utils/__init__.py 4 0 100%
src/peft/utils/constants.py 29 0 100%
src/peft/utils/integrations.py 34 18 47% 28, 34-38, 48, 54-68
src/peft/utils/loftq_utils.py 233 202 13% 36-48, 52-60, 64-86, 89-102, 105-112, 115-153, 157-169, 176-186, 191-238, 243-259, 271-309, 312-328, 367-410
src/peft/utils/merge_utils.py 79 7 91% 91-92, 94, 100, 120-123
src/peft/utils/other.py 299 103 66% 75, 77, 80-84, 123, 127-155, 169-178, 214, 223-226, 231-234, 244-252, 269, 296, 339-342, 366, 377, 390, 400-438, 455-459, 469, 487, 496-517, 537-539, 561-565, 577, 581, 593, 600-601
src/peft/utils/peft_types.py 23 0 100%
src/peft/utils/save_and_load.py 214 37 83% 83-92, 97-99, 105-116, 155, 163, 206-209, 237, 244, 322, 325, 329, 343, 392, 429-430, 437
-----------------------------------------------------------------------------------
TOTAL 8030 2312 71%

============================= slowest 10 durations =============================
23.27s call tests/test_auto.py::PeftAutoModelTester::test_peft_feature_extraction
22.89s call tests/test_decoder_models.py::PeftDecoderModelTester::test_prepare_for_training_parametrized_20_test_hf_internal_testing_tiny_random_GPT2LMHeadModel_prompt_tuning
18.96s call tests/test_poly.py::TestPoly::test_poly
18.24s call tests/test_decoder_models.py::PeftDecoderModelTester::test_save_pretrained_selected_adapters_pickle_04_test_hf_internal_testing_tiny_random_OPTForCausalLM_prompt_tuning
17.17s call tests/test_decoder_models.py::PeftDecoderModelTester::test_save_pretrained_selected_adapters_pickle_45_test_hf_internal_testing_tiny_random_GPTJForCausalLM_boft
17.01s call tests/test_encoder_decoder_models.py::PeftEncoderDecoderModelTester::test_save_pretrained_selected_adapters_07_test_ybelkada_tiny_random_T5ForConditionalGeneration_calibrated_vera
16.70s call tests/test_decoder_models.py::PeftDecoderModelTester::test_save_pretrained_selected_adapters_pickle_09_test_hf_internal_testing_tiny_random_GPTNeoXForCausalLM_lora
16.68s call tests/test_decoder_models.py::PeftDecoderModelTester::test_save_pretrained_selected_adapters_10_test_hf_internal_testing_tiny_random_GPTNeoXForCausalLM_prefix_tuning
16.64s call tests/test_decoder_models.py::PeftDecoderModelTester::test_save_pretrained_selected_adapters_18_test_hf_internal_testing_tiny_random_GPT2LMHeadModel_prefix_tuning
16.59s call tests/test_decoder_models.py::PeftDecoderModelTester::test_save_pretrained_selected_adapters_pickle_00_test_hf_internal_testing_tiny_random_OPTForCausalLM_ia3
== 4368 passed, 1483 skipped, 1 xfailed, 2981 warnings in 5991.03s (1:39:51) ===