
Commit

[fbsync] Change torch.arange dtype from torch.float32 to torch.int32 in anchor_utils.py (#4395) (#4409)

Summary:

Reviewed By: datumbox

Differential Revision: D31268024

fbshipit-source-id: 0294ad05fc94bdf5a6d3eba50d85813d568e8fbe

Co-authored-by: Julien RIPOCHE <ripoche@magic-lemp.com>
Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
3 people authored and facebook-github-bot committed Sep 30, 2021
1 parent 54f89b9 commit 646d5e4
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions torchvision/models/detection/anchor_utils.py
@@ -97,10 +97,10 @@ def grid_anchors(self, grid_sizes: List[List[int]], strides: List[List[Tensor]])

             # For output anchor, compute [x_center, y_center, x_center, y_center]
             shifts_x = torch.arange(
-                0, grid_width, dtype=torch.float32, device=device
+                0, grid_width, dtype=torch.int32, device=device
             ) * stride_width
             shifts_y = torch.arange(
-                0, grid_height, dtype=torch.float32, device=device
+                0, grid_height, dtype=torch.int32, device=device
             ) * stride_height
             shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)
             shift_x = shift_x.reshape(-1)
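
For illustration only (not part of the commit): a minimal standalone sketch of the updated shift computation, using small made-up grid and stride values. The names grid_width, grid_height, stride_width and stride_height mirror the local variables in grid_anchors; the concrete numbers below are assumptions for the example.

# Illustrative sketch of the shift computation after this change (values are made up).
import torch

device = torch.device("cpu")
grid_width, grid_height = 4, 4                    # assumed feature-map size for this example
stride_width = torch.tensor(8, device=device)     # assumed strides; grid_anchors passes Tensors
stride_height = torch.tensor(8, device=device)

# With dtype=torch.int32 the arange stays integral, so the shifts are exact integer offsets.
shifts_x = torch.arange(0, grid_width, dtype=torch.int32, device=device) * stride_width
shifts_y = torch.arange(0, grid_height, dtype=torch.int32, device=device) * stride_height
shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x)
shift_x = shift_x.reshape(-1)
shift_y = shift_y.reshape(-1)
print(shift_x)  # tensor([ 0,  8, 16, 24,  0,  8, 16, 24,  0,  8, 16, 24,  0,  8, 16, 24])
print(shift_y)  # tensor([ 0,  0,  0,  0,  8,  8,  8,  8, 16, 16, 16, 16, 24, 24, 24, 24])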