[torchbench] vision_maskrcnn failing on inference with dynamo after bfloat16 conversion. #6557
🐛 Bug
After converting the `vision_maskrcnn` model to `bfloat16`, running inference with dynamo raises an error. The following command reproduces it:

```shell
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench --accelerator cuda \
    --xla PJRT --dynamo openxla --test eval \
    --no-resume --print-subprocess \
    -k vision_maskrcnn
```
Environment
cc @miladm @JackCaoG