fix: copy model to cpu for quantized inference
julianhoever committed Oct 6, 2023
1 parent e18f46f commit 0c5d88e
Showing 1 changed file with 1 addition and 1 deletion.
@@ -53,5 +53,5 @@ def _stepped_inputs(self, x: torch.Tensor) -> torch.Tensor:
     def _quantized_inference(self, x: int) -> int:
         fxp_input = self._config.as_rational(x)
         with torch.no_grad():
-            output = self(torch.tensor(fxp_input))
+            output = self.cpu()(torch.tensor(fxp_input))
         return self._config.as_integer(float(output.item()))
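
For context, the one-line change matters because torch.tensor(fxp_input) always allocates its result on the CPU; if the model's parameters still live on a GPU, the forward pass fails with a device-mismatch error. Below is a minimal sketch of that situation, assuming a hypothetical single-layer stand-in model (the names are illustrative, not taken from the repository):

import torch

# Hypothetical stand-in for the quantized model, not the repository's class.
model = torch.nn.Linear(1, 1)

if torch.cuda.is_available():
    model = model.cuda()  # parameters now live on the GPU

# torch.tensor(...) creates a CPU tensor by default.
fxp_input = torch.tensor([0.5])

with torch.no_grad():
    # Calling model(fxp_input) here would raise a RuntimeError
    # ("Expected all tensors to be on the same device") when the model is on CUDA.
    # Moving the module to the CPU first, as the commit does, avoids that:
    output = model.cpu()(fxp_input)

print(output.item())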
