tacotron2

🐛 Bug

Running the upstreamed benchmarking scripts with the following command results in an unexpected error.
```sh
python xla/benchmarks/experiment_runner.py \
    --suite-name torchbench \
    --accelerator cuda \
    --xla PJRT --xla None \
    --dynamo openxla --dynamo None \
    --test train \
    --repeat 30 --iterations-per-run 5 \
    --print-subprocess \
    --no-resume -k tacotron2
```
```
Traceback (most recent call last):
  File "xla/benchmarks/experiment_runner.py", line 604, in <module>
    main()
  File "xla/benchmarks/experiment_runner.py", line 600, in main
    runner.run()
  File "xla/benchmarks/experiment_runner.py", line 65, in run
    self.run_single_experiment(experiment_config, model_config)
  File "xla/benchmarks/experiment_runner.py", line 161, in run_single_experiment
    run_metrics, output = self.timed_run(benchmark_experiment,
  File "xla/benchmarks/experiment_runner.py", line 328, in timed_run
    output = loop()
  File "xla/benchmarks/experiment_runner.py", line 310, in loop
    output = benchmark_model.model_iter_fn(
  File "xla/benchmarks/benchmark_model.py", line 154, in eval
    pred = self.module(*inputs)
  File "torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ysiraichi/benchmark/torchbenchmark/models/tacotron2/model.py", line 505, in forward
    encoder_outputs = self.encoder(embedded_inputs, text_lengths)
  File "torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ysiraichi/benchmark/torchbenchmark/models/tacotron2/model.py", line 185, in forward
    outputs, _ = self.lstm(x)
  File "torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch/nn/modules/rnn.py", line 881, in forward
    result = _VF.lstm(input, batch_sizes, hx, self._flat_weights, self.bias,
RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet. Caffe2 uses a lazy allocation, so you will need to call mutable_data() or raw_mutable_data() to actually allocate memory.
```
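For reference, a hypothetical minimal sketch of the failing call path (not the benchmark harness itself): the traceback bottoms out in `nn.LSTM`'s forward with a packed input (note the `batch_sizes` argument passed to `_VF.lstm`), which is how tacotron2's encoder invokes its bidirectional LSTM. The shapes below are illustrative, not tacotron2's real hyperparameters. In plain CPU eager mode this runs fine; the `RuntimeError` above only appears under the XLA/Dynamo configurations that `experiment_runner.py` sets up.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Bidirectional LSTM, analogous to the one in tacotron2's encoder.
lstm = nn.LSTM(input_size=512, hidden_size=256,
               batch_first=True, bidirectional=True)

x = torch.randn(4, 100, 512)               # (batch, max_seq_len, features)
lengths = torch.tensor([100, 80, 60, 40])  # valid timesteps per sequence

# Packing is what routes the call through _VF.lstm with batch_sizes.
packed = pack_padded_sequence(x, lengths, batch_first=True)
outputs, _ = lstm(packed)                  # the call that fails under XLA
outputs, _ = pad_packed_sequence(outputs, batch_first=True)

print(outputs.shape)  # torch.Size([4, 100, 512]); 512 = hidden_size * 2 directions
```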
Environment