[Mergekit] update & add LoRA merge #9811
base: develop
Conversation
paddlenlp/mergekit/merge_method.py
Outdated
-    weights = paddle.to_tensor(weight_list, dtype=stacked_tensors.dtype)
-    weights = weights.reshape([-1] + [1] * (len(stacked_tensors.shape) - 1))
-    weighted_sum = paddle.sum(stacked_tensors * weights, axis=0)
+    weighted_sum = paddle.zeros_like(tensor_list[0])
Testing showed the for-loop implementation is faster than stacking first, so the stack approach was replaced.
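The two approaches compute the same weighted sum; only the memory-traffic pattern differs. A minimal NumPy sketch (mirroring the paddle semantics; function names are hypothetical) of the broadcast-over-stack form versus the in-place accumulation loop:

```python
import numpy as np

def weighted_sum_stack(tensor_list, weight_list):
    # Stack approach: materializes one big (n, ...) array, then broadcasts the weights.
    stacked = np.stack(tensor_list, axis=0)
    weights = np.asarray(weight_list, dtype=stacked.dtype)
    weights = weights.reshape([-1] + [1] * (stacked.ndim - 1))
    return np.sum(stacked * weights, axis=0)

def weighted_sum_loop(tensor_list, weight_list):
    # Loop approach: accumulates into one buffer, avoiding the stacked copy.
    out = np.zeros_like(tensor_list[0])
    for t, w in zip(tensor_list, weight_list):
        out += t * w
    return out

tensors = [np.ones((2, 3)), 2 * np.ones((2, 3))]
a = weighted_sum_stack(tensors, [0.25, 0.75])
b = weighted_sum_loop(tensors, [0.25, 0.75])
assert np.allclose(a, b)  # both give 0.25*1 + 0.75*2 = 1.75 everywhere
```

The loop form never allocates the `(n, ...)` stacked array, which is presumably why it benchmarked faster for large model tensors.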
-    tensor *= mask
-    if self.merge_config.rescale:
-        tensor /= self.merge_config.reserve_p
+    mode = "upscale_in_train" if self.merge_config.rescale else "downscale_in_infer"
Switched to the more efficient built-in dropout implementation.
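For reference, the manual mask-and-rescale path the diff removes matches dropout's "upscale_in_train" convention: keep each element with probability `reserve_p` and scale the survivors by `1/reserve_p`, so inference needs no extra downscaling. A NumPy sketch of the identity (variable names are illustrative, not the PR's):

```python
import numpy as np

rng = np.random.default_rng(0)
tensor = rng.standard_normal((4, 4))
reserve_p = 0.8  # probability of keeping an element
mask = (rng.random((4, 4)) < reserve_p).astype(tensor.dtype)

# Manual path removed in the diff: zero out dropped entries, then rescale.
manual = tensor * mask / reserve_p

# "upscale_in_train"-style dropout applies the same 1/reserve_p scaling
# at train/merge time, leaving the expected value of each entry unchanged.
upscaled = (tensor * mask) * (1.0 / reserve_p)
assert np.allclose(manual, upscaled)
```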
        return tensor
    else:
        raise ValueError(f"Unknown tensor type {self.merge_config.tensor_type}")
 def magprune(self, tensor):
     if self.merge_config.tensor_type == "np":
-        if np.all(tensor == 0):
+        if not np.any(tensor != 0):
Testing showed the any-based check is more efficient than the all-based one.
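The two conditions are logically equivalent by De Morgan's law: "every element equals zero" is the same as "no element is nonzero". A quick NumPy check:

```python
import numpy as np

tensor = np.zeros((1000,))
# All-zero test, two equivalent spellings:
assert np.all(tensor == 0) == (not np.any(tensor != 0))  # both True here

tensor[0] = 1.0  # now the tensor has a nonzero element
assert np.all(tensor == 0) == (not np.any(tensor != 0))  # both False here
```

Which form is faster is an empirical question (the PR author reports `any` won in their benchmarks); correctness is unaffected either way.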
Codecov Report

Attention: Patch coverage is 66.98%.

❌ Your patch check has failed because the patch coverage (66.98%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files:

@@            Coverage Diff             @@
##           develop    #9811      +/-  ##
===========================================
+ Coverage    51.27%   52.24%    +0.96%
===========================================
  Files          735      730        -5
  Lines      121550   115724     -5826
===========================================
- Hits        62329    60458     -1871
+ Misses      59221    55266     -3955

☔ View full report in Codecov by Sentry.
-    weights = weights.reshape([-1] + [1] * (len(stacked_tensors.shape) - 1))
-    weighted_sum = paddle.sum(stacked_tensors * weights, axis=0)
-    return weighted_sum
+    tensor_output = paddle.zeros_like(tensor_list[0])
Stacking tensors takes longer than processing them directly; the same rationale applies to the later changes.
@@ -29,3 +30,44 @@ def divide_positions(m, n):
         positions.append(positions[-1] + base_value)
     positions.append(m)
     return positions


+def divide_lora_key_list(key_list, n, lora_config):
We found that splitting key_list naively can produce an uneven allocation, so different cards run at different speeds; this adds some optimization.
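The balanced splitting idea behind `divide_positions` can be sketched as follows: split `m` items into `n` contiguous chunks whose sizes differ by at most one, spreading the remainder over the first few workers (a minimal sketch, not the PR's actual implementation):

```python
def divide_positions(m, n):
    # Returns n+1 boundary indices; chunk i is [positions[i], positions[i+1]).
    base, extra = divmod(m, n)
    positions = [0]
    for i in range(n):
        # The first `extra` chunks get one additional item each.
        positions.append(positions[-1] + base + (1 if i < extra else 0))
    return positions

# 10 keys over 4 workers -> chunk sizes 3, 3, 2, 2
assert divide_positions(10, 4) == [0, 3, 6, 8, 10]
```

Key-count balance alone is not the whole story, though: LoRA A/B pairs and differently sized tensors mean equal-length key lists can still cost unequal work, which is presumably what `divide_lora_key_list` accounts for.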
     self.mergekit()
+    self.copy_file()

+def copy_file(self):
Used to copy tokenizer-related files.
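A minimal sketch of what such a helper typically does (the filename list is a hypothetical example, not the PR's): copy tokenizer artifacts from the base model directory into the merge output directory so the merged checkpoint loads with its tokenizer.

```python
import os
import shutil
import tempfile

# Hypothetical list of tokenizer artifacts to carry over.
TOKENIZER_FILES = ["tokenizer_config.json", "special_tokens_map.json"]

def copy_tokenizer_files(src_dir, dst_dir, names=TOKENIZER_FILES):
    os.makedirs(dst_dir, exist_ok=True)
    for name in names:
        src = os.path.join(src_dir, name)
        if os.path.exists(src):  # tolerate models that lack optional files
            shutil.copy2(src, dst_dir)

# Tiny self-contained demo with temp directories.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "tokenizer_config.json"), "w") as f:
    f.write("{}")
copy_tokenizer_files(src, dst)
assert os.path.exists(os.path.join(dst, "tokenizer_config.json"))
```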
tensor_mem = int(np.prod(tensor_list[0].shape) * self.numpy_dtype_map[str(tensor_list[0].dtype)]) / (
    1024**3
)
if self.merge_config.tensor_type == "pd" and tensor_mem > self.merge_config.max_tensor_mem:
Handles very large tensors such as word embeddings, to prevent OOM.
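The memory estimate in the diff is element count times bytes-per-element, converted to GiB, compared against a configured ceiling. A self-contained sketch (the dtype-size map and 4 GiB threshold are illustrative assumptions, not the PR's values):

```python
# Bytes per element for common dtypes; bf16 is often carried as uint16 in NumPy.
NUMPY_DTYPE_BYTES = {"float32": 4, "float16": 2, "uint16": 2}
MAX_TENSOR_MEM_GB = 4.0  # hypothetical threshold

def tensor_mem_gb(shape, dtype):
    n = 1
    for d in shape:
        n *= d
    return n * NUMPY_DTYPE_BYTES[dtype] / 1024**3

def needs_chunking(shape, dtype, limit=MAX_TENSOR_MEM_GB):
    # Oversized tensors (e.g. word embeddings) get processed in slices to avoid OOM.
    return tensor_mem_gb(shape, dtype) > limit

# A 152064 x 8192 float32 word embedding is ~4.64 GiB -> process in slices.
assert needs_chunking((152064, 8192), "float32")
assert not needs_chunking((4096, 4096), "float32")
```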
tensor_list = [paddle.Tensor(tensor, zero_copy=True) for tensor in tensor_list]
elif self.merge_config.tensor_type == "np" and is_bf16:
    tensor_list = [
        paddle.Tensor(tensor, zero_copy=True).astype("float32").numpy() for tensor in tensor_list
    ]
paddle.Tensor(tensor, zero_copy=True) is faster than paddle.to_tensor in the CPU scenario.
-    if use_gpu:
-        positions = divide_positions(len(key_list), dist.get_world_size())
+    num = self.merge_config.n_process if self.is_cpu else dist.get_world_size()
+    if file_type_list[0] == "safetensors" and len(set(index_list[0]["weight_map"].values())) >= num:
If the safetensors checkpoint has multiple shards, splitting by shard count is more balanced.
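A sketch of shard-aware splitting: group keys by the shard file recorded in the safetensors index's `weight_map` (the key-to-filename mapping stored in `model.safetensors.index.json`), then hand out whole shards so each worker mostly reads its own files. A minimal illustration, not the PR's implementation:

```python
from collections import defaultdict

def split_keys_by_shard(weight_map, num_workers):
    # weight_map: tensor key -> shard filename, as in model.safetensors.index.json
    shards = defaultdict(list)
    for key, filename in weight_map.items():
        shards[filename].append(key)
    # Round-robin whole shards across workers.
    buckets = [[] for _ in range(num_workers)]
    for i, filename in enumerate(sorted(shards)):
        buckets[i % num_workers].extend(shards[filename])
    return buckets

weight_map = {
    "emb.weight": "model-00001.safetensors",
    "layer0.weight": "model-00001.safetensors",
    "layer1.weight": "model-00002.safetensors",
    "lm_head.weight": "model-00002.safetensors",
}
buckets = split_keys_by_shard(weight_map, 2)
assert sorted(buckets[0]) == ["emb.weight", "layer0.weight"]
assert sorted(buckets[1]) == ["layer1.weight", "lm_head.weight"]
```

This only helps when there are at least as many shards as workers, which matches the `>= num` guard in the diff above.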
PR types
New features
PR changes
APIs
Description