How to make the pruner support an FPN-like structure? #79
Comments
Could you upload the pruner config?
I'm very sorry for the inconvenience.
This is what I was concerned about.
I have done this by passing a prebuilt channel space (in txt format) to my reimplemented AutoSlim. It is hard for the parser to handle every network architecture; the same problem exists in NNI (https://nni.readthedocs.io/en/stable/Compression/ModelSpeedup.html#limitations). The channel space can be generated by NNI or mmrazor and saved to a text file, which users can then edit by hand if the channel dependencies were not built correctly. What is your opinion?
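For illustration, a minimal sketch of the idea, assuming a hypothetical file format; neither the format nor `load_channel_space()` comes from mmrazor or NNI:

```python
# channel_space.txt (hypothetical format) -- one pruning group per line,
# listing the layers whose output channels must be pruned together:
#   group_id : comma-separated layer names : candidate channel counts
#   0 : backbone.layer1.0.conv1 : 16,32,48,64
#   1 : neck.lateral_convs.0.conv, neck.fpn_convs.0.conv : 64,128,192,256

def load_channel_space(path):
    """Parse the hand-editable channel-space file into
    {group_id: (layer_names, candidate_channels)}."""
    space = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue  # skip blank lines and comments
            group_id, layers, channels = line.split(':')
            space[int(group_id)] = (
                [name.strip() for name in layers.split(',')],
                [int(c) for c in channels.split(',')],
            )
    return space
```

Because the file is plain text, a user can correct a wrongly grouped dependency with any editor instead of patching the parser.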
Sounds great!
Sure.
Before our open-source release, most popular models could be handled correctly, such as ResNet, MobileNet, RetinaNet, YOLOX, etc. Probably something went wrong when we refactored the code.
Hi! This bug has been fixed in PR #126.
Hi, can you please upload the prune config file? I used the way you referred to but still got errors. Did you successfully run AutoSlim on an object detection task? Thanks.
I have not tried the latest mmrazor. Did you try?
I tried using the latest one but still failed. I am not sure if I gave a wrong config or if there is still a bug.
I'm very sorry for the inconvenience.
Hi, thanks for your reply. I tried this config but still failed. Here is the config:
And the error is the same as usual:
Looking forward to your response. Thanks.
You may try to set optimizer_config to None.
After changing that, I can now run pruning. Could you please explain why that setting matters?
AutoSlim calls optimizer.step() itself rather than through an mmcv hook. Setting optimizer_config to None means the mmcv OptimizerHook is never registered, so optimizer.step() is not called twice.
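In config terms, that corresponds to something like the following (a minimal sketch of an mmcv-style config fragment; the optimizer settings are placeholders):

```python
# Sketch of the relevant fragment of an mmcv-style training config.
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)

# With optimizer_config = None, mmcv registers no OptimizerHook, so the
# only optimizer.step() is the one AutoSlim performs in its own train step.
optimizer_config = None  # instead of e.g. dict(grad_clip=None)
```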
Thanks. Now it seems the returned loss becomes NaN. Do you have any idea about it? Thanks a lot!
We have not verified whether AutoSlim works on object detection. Maybe you can try to prune
Do you detach the teacher's output in the loss function, as is done here?
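For reference, a minimal sketch of detaching the teacher's output in a distillation loss (plain PyTorch, not mmrazor's actual loss code):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=1.0):
    # Detach so no gradient flows back through the teacher; forgetting
    # this is a common cause of unstable or NaN distillation losses.
    teacher_logits = teacher_logits.detach()
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction='batchmean',
    ) * (t * t)
```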
He did not use distillation.
My bad.
In fact, I have implemented my own AutoSlim. It's quite different from mmrazor, and its memory usage is much more efficient. I use gradient clipping in object detection; without distillation the results are satisfactory, but when applying distillation like CWD, the results are bad. You may try gradient clipping if you get NaN at the beginning of training. By the way, most anytime-network papers (like BigNAS) do not explain how they use distillation in object detection; I am exploring this and look forward to your experiments on it.
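A minimal sketch of that gradient-clipping suggestion in a hand-written training step (plain PyTorch; since optimizer_config is None here, clipping has to happen where optimizer.step() is called, and max_norm=35 is just the value many mmdet configs use):

```python
import torch

def train_step(model, data, optimizer, max_norm=35.0):
    losses = model(**data)   # assumes the model returns a dict of loss tensors
    loss = sum(losses.values())
    optimizer.zero_grad()
    loss.backward()
    # Clip before stepping so exploding gradients early in training
    # cannot push the loss to NaN.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
    return loss
```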
I would appreciate it if you could share how you save memory in your implementation, and we will improve our code based on that.
I am trying to prune the model from mmdet (https://github.com/open-mmlab/mmdetection/blob/master/configs/atss/atss_r50_fpn_1x_coco.py), but it throws an exception when forwarding through the FPN.
Any idea?
By the way, I think it would be better to let users configure a whole block (like the neck and bbox_head) as a group sharing the same mask, since these blocks are always complicated and the parsers are hard to modify to handle such cases; a sketch of the idea follows below.
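A sketch of what such a user-facing option might look like (purely hypothetical: shared_groups is not a real mmrazor field, though RatioPruner and ratios are modeled on mmrazor's pruner config):

```python
# Hypothetical pruner config: modules listed in one group would share a
# single channel mask, so the parser never has to trace dependencies
# through complicated blocks such as the FPN neck and the bbox head.
pruner = dict(
    type='RatioPruner',
    ratios=(0.25, 0.5, 0.75, 1.0),
    shared_groups=[             # hypothetical field, not in mmrazor
        ['neck', 'bbox_head'],  # prune the whole neck + head together
    ],
)
```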