Loading pre-trained model configuration from Python file #4737
How can we see a demo of the Python configs, such as MViT? And if we train MViTv2, can it do instance segmentation?
Since I struggled to get Densepose working, I wanted to share my function for working with it. Hopefully other people can get going quicker with this as a guide.
I also wanted to mention that I haven't been able to figure out how to run inference on a pre-trained model defined by a Python LazyConfig. The closest guidance I could find is here; however, that script only seems to cover evaluating the standard datasets, and more work is required to apply the model to a custom dataset. There doesn't seem to be an easy way to run inference with LazyConfig models.
Then follow https://detectron2.readthedocs.io/en/latest/tutorials/models.html#load-save-a-checkpoint to load a checkpoint.
@ppwwyyxx I tried following your recommendation with the following code:

```python
cfg = LazyConfig.load("configs/new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py")
model = instantiate(cfg.model)

from detectron2.checkpoint import DetectionCheckpointer
DetectionCheckpointer(model).load("detectron2://new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ/42073830/model_final_f96b26.pkl")  # load a file, usually from cfg.MODEL.WEIGHTS

# use PIL, to be consistent with evaluation
img = torch.from_numpy(np.ascontiguousarray(read_image(img_path, format="BGR")))
img = img.permute(2, 0, 1)  # HWC -> CHW
if torch.cuda.is_available():
    img = img.cuda()
inputs = [{"image": img}]

model.eval()
with torch.no_grad():
    predictions = model(inputs)
```

This generated the following error:

```
Traceback (most recent call last):
  File "/app/Anthropometry/quality_test.py", line 108, in <module>
    results_ls = get_person_seg_masks(img_path, model_family, model)
  File "/app/Anthropometry/detectron2_wrapper.py", line 152, in get_person_seg_masks
    predictions = model(inputs)
  File "/home/appuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/appuser/detectron2_repo/detectron2/modeling/meta_arch/rcnn.py", line 150, in forward
    return self.inference(batched_inputs)
  File "/home/appuser/detectron2_repo/detectron2/modeling/meta_arch/rcnn.py", line 204, in inference
    features = self.backbone(images.tensor)
  File "/home/appuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/appuser/detectron2_repo/detectron2/modeling/backbone/fpn.py", line 139, in forward
    bottom_up_features = self.bottom_up(x)
  File "/home/appuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/appuser/detectron2_repo/detectron2/modeling/backbone/resnet.py", line 445, in forward
    x = self.stem(x)
  File "/home/appuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/appuser/detectron2_repo/detectron2/modeling/backbone/resnet.py", line 356, in forward
    x = self.conv1(x)
  File "/home/appuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/appuser/detectron2_repo/detectron2/layers/wrappers.py", line 117, in forward
    x = self.norm(x)
  File "/home/appuser/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/appuser/.local/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 683, in forward
    raise ValueError("SyncBatchNorm expected input tensor to be on GPU")
ValueError: SyncBatchNorm expected input tensor to be on GPU
```

Any idea what my problem is? Thanks again for your help!

Same as #3972 (comment)
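The error is raised because the input reached a SyncBatchNorm layer while still on the CPU: either CUDA isn't available on the machine, or the model was never moved to the GPU (the snippet above moves `img` to CUDA but never calls `model.cuda()`). A minimal, generic sketch of keeping model and input together on one device (plain PyTorch, not a detectron2 API; `to_same_device` is a hypothetical helper name):

```python
import torch
import torch.nn as nn

def to_same_device(model, img):
    """Move a model and an input tensor to the same device:
    CUDA when available, otherwise CPU."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    return model.to(device), img.to(device)
```

Calling this once, before building `inputs = [{"image": img}]`, guarantees that every layer (including SyncBatchNorm) sees a tensor on the same device its weights live on.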
Alright, well thanks to the help of @ppwwyyxx, I was able to run inference on a LazyConfig model with the following script!
This works for me! Thanks!
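A related note for readers without a GPU at all: SyncBatchNorm raises exactly the `ValueError` seen above whenever its input is on the CPU, so the new_baselines models cannot run as-is on a CPU-only machine. One workaround (a sketch I'm suggesting, not a detectron2 or PyTorch API; `convert_syncbn_to_bn` is a hypothetical helper) is to recursively swap each SyncBatchNorm for an equivalent BatchNorm2d before inference:

```python
import torch
import torch.nn as nn

def convert_syncbn_to_bn(module):
    """Recursively replace nn.SyncBatchNorm layers with nn.BatchNorm2d,
    copying their parameters and running statistics, so the model can
    run on CPU."""
    out = module
    if isinstance(module, nn.SyncBatchNorm):
        out = nn.BatchNorm2d(
            module.num_features,
            eps=module.eps,
            momentum=module.momentum,
            affine=module.affine,
            track_running_stats=module.track_running_stats,
        )
        if module.affine:
            out.weight = module.weight
            out.bias = module.bias
        out.running_mean = module.running_mean
        out.running_var = module.running_var
        out.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        out.add_module(name, convert_syncbn_to_bn(child))
    return out
```

At inference time the swap is lossless: in `eval()` mode both layers just normalize with the stored running statistics, and SyncBatchNorm's cross-GPU statistics only matter during training.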
📚 Documentation Issue
I'm struggling to load the pre-trained model defined by new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ.py. I've found relevant documentation here and here, and issue #3225; however, none of these clearly explains my error.
I'm trying to load the configuration with:
This produces the following traceback: