[torchbench] Check failed: cachedComputation #5967
Comments
hmm it is in https://github.com/pytorch/xla/blob/master/torch_xla/csrc/xla_graph_executor.cpp#L621-L624. One possible reason is that this model compiles way too many times, so the LRU cache kicks out one of the graphs. You can try to increase the default cache size in xla/torch_xla/csrc/xla_graph_executor.cpp (Lines 475 to 480 in 2c4983d).
If this is not the case, it would be really weird, since for every cache entry we should store a computation during the dynamo compilation phase.
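For concreteness, here is a minimal sketch of the cache-size experiment suggested above, assuming the cache bound in xla_graph_executor.cpp is read from the `XLA_COMPILATION_CACHE_SIZE` environment variable (the exact variable name and default should be double-checked against the lines referenced above):

```python
# Sketch: raise the compilation cache size before torch_xla is imported, then
# re-run the failing benchmark. XLA_COMPILATION_CACHE_SIZE is assumed to be the
# knob read by xla_graph_executor.cpp; verify against the referenced lines.
import os
os.environ.setdefault("XLA_COMPILATION_CACHE_SIZE", "4096")

import torch_xla  # must be imported after the env var is set

# ... re-run the failing torchbench model here. If "Check failed:
# cachedComputation" no longer reproduces, the failure was an LRU eviction
# rather than a missing cache insert.
```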
Just checked with different …
Looks like these models are skipped by torchbench for inductor. Do we know what their error would be had they not been skipped?
You can add print statements to both the dynamo bridge and the LRU cache to print the hashes being inserted into the cache. You should be able to see when the compiled program is inserted into the cache. If you never see the cached computation with that hash being inserted, there is a bug in the dynamo bridge's hash computation.
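A minimal sketch of the Python-side half of that logging, assuming the dynamo bridge obtains the hash through the `torch_xla._XLAC._get_graph_hash` binding (the actual call site in torch_xla/core/dynamo_bridge.py may differ); a matching print in the C++ ComputationCache's insert/lookup path would cover the other half:

```python
# Sketch: log the graph hash the dynamo bridge computes at compile time, so it
# can be matched against the hashes the C++ LRU cache reports inserting.
import torch_xla

def log_graph_hash(args_and_out):
  # _get_graph_hash is assumed to be the binding dynamo_bridge.py uses to hash
  # the traced graph; adjust to the local source if the name differs.
  graph_hash = torch_xla._XLAC._get_graph_hash(args_and_out)
  print(f"[dynamo_bridge] graph hash at compile time: {graph_hash}", flush=True)
  return graph_hash
```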
Those two got skipped because …
That's odd. Last I tried (a8b27eb), they were still passing on inductor. I will try running them again on master.
Oops. I think I misinterpreted your question. So, on torchbench they are skipped only if we try to export those models. Otherwise, they should pass (I think).
Sorry, I mean when I try …
I have looked into this issue and, contrary to what @zpcore found, I successfully ran with … Running … I would say we should:
@zpcore @miladm @golechwierowicz @cota @frgossen |
🐛 Bug
Running a few torchbench benchmarks with the dynamo+openxla backend ends in an assertion failure:
Stack Trace
Affected Benchmarks
Environment
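For reference, a minimal repro sketch of the setup described above (the model is a placeholder, not one of the affected benchmarks):

```python
# Sketch: compile a model with the dynamo "openxla" backend on an XLA device.
# In the failing benchmarks, "Check failed: cachedComputation" fires when the
# dynamo execution path looks up a compiled graph that is missing from the
# computation cache.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(16, 16).to(device)         # placeholder model
compiled = torch.compile(model, backend="openxla")

x = torch.randn(4, 16, device=device)
print(compiled(x).shape)
```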