
[Benchmark] Add inference test for current benchmarking and record performance #4892

Merged 29 commits on Jul 13, 2022

Conversation

@yanbing-j (Contributor) commented Jun 30, 2022

Currently, the PyG benchmark only contains training tests, which makes it inconvenient to start optimization work from inference. In this PR, we add inference tests for the citation benchmark and to_hetero_mag, and record the end-to-end time for one epoch.
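The end-to-end inference timing described above can be sketched roughly as follows (a minimal illustration, not the PR's actual code; the model and data here are placeholders standing in for a PyG model and dataset):

```python
import time

import torch


@torch.no_grad()  # disable autograd so only the forward-pass cost is measured
def run_inference(model, x):
    model.eval()  # inference mode: e.g. disables dropout, freezes batch norm stats
    start = time.perf_counter()
    out = model(x)
    return out, time.perf_counter() - start


# Placeholder model and input for illustration only.
model = torch.nn.Linear(16, 4)
x = torch.randn(32, 16)
out, seconds = run_inference(model, x)
print(f'end-to-end inference time: {seconds:.6f}s')
```

On GPU one would additionally call `torch.cuda.synchronize()` before reading the timer, since CUDA kernels launch asynchronously.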

@codecov bot commented Jun 30, 2022

Codecov Report

Merging #4892 (fea0f43) into master (da7be5d) will decrease coverage by 1.83%.
The diff coverage is 94.73%.

❗ Current head fea0f43 differs from pull request most recent head fd39e26. Consider uploading reports for the commit fd39e26 to get more accurate results

@@            Coverage Diff             @@
##           master    #4892      +/-   ##
==========================================
- Coverage   84.65%   82.82%   -1.84%     
==========================================
  Files         330      330              
  Lines       17908    17924      +16     
==========================================
- Hits        15160    14845     -315     
- Misses       2748     3079     +331     
Impacted Files Coverage Δ
torch_geometric/profile/profile.py 48.23% <94.44%> (+12.41%) ⬆️
torch_geometric/profile/__init__.py 100.00% <100.00%> (ø)
torch_geometric/nn/models/dimenet_utils.py 0.00% <0.00%> (-75.52%) ⬇️
torch_geometric/nn/models/dimenet.py 14.51% <0.00%> (-53.00%) ⬇️
torch_geometric/nn/conv/utils/typing.py 81.25% <0.00%> (-17.50%) ⬇️
torch_geometric/nn/inits.py 67.85% <0.00%> (-7.15%) ⬇️
torch_geometric/nn/resolver.py 88.00% <0.00%> (-6.00%) ⬇️
torch_geometric/io/tu.py 93.90% <0.00%> (-2.44%) ⬇️
torch_geometric/nn/models/mlp.py 94.44% <0.00%> (-1.39%) ⬇️
torch_geometric/transforms/gdc.py 78.17% <0.00%> (-1.02%) ⬇️
... and 4 more

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update da7be5d...fd39e26.

@yanbing-j force-pushed the yanbing/benchmark_inference branch from f88bc93 to 5eb0491 on July 1, 2022 04:45
@yanbing-j yanbing-j changed the title [benchmark] Add inference test for citation benchmark [benchmark] Add inference test for citation benchmark and to_hetero_mag Jul 1, 2022
@yanbing-j (Contributor, Author) commented:

Hi @rusty1s, I retested the failure of test_temporal_heterogeneous_link_neighbor_loader on the master branch locally, and it produces the same error as in this PR. Am I running the test incorrectly, or is there an environment issue? The command is pytest test_link_neighbor_loader.py::test_temporal_heterogeneous_link_neighbor_loader. Thank you!

@rusty1s (Member) commented Jul 1, 2022

Will have a look!

@yanbing-j yanbing-j changed the title [benchmark] Add inference test for citation benchmark and to_hetero_mag [benchmark] Add inference test for current benchmarking and record performance Jul 6, 2022
@rusty1s (Member) left a comment

Thank you! This looks good. Mostly left comments regarding styling and re-usability of code blocks. Let me know what you think!

Review comments (resolved) on:
- benchmark/citation/appnp.py
- benchmark/citation/inference.sh
- benchmark/citation/train_eval.py
- benchmark/points/train_eval.py
- examples/hetero/to_hetero_mag.py
@lightaime (Contributor) left a comment

Thank you! Looks great. I added some nitpicky comments :). Some styling issues can be fixed by installing the pre-commit hooks: pre-commit install.

Review comments (resolved) on:
- benchmark/citation/appnp.py
@yanbing-j force-pushed the yanbing/benchmark_inference branch from 14cc047 to 9bd310a on July 11, 2022 08:04
@yanbing-j requested review from rusty1s and lightaime on July 11, 2022 11:34
@rusty1s (Member) left a comment

Thank you! I think this looks great. My only concern is that I would like to keep PyG examples lightweight - for PNA, we could think about adding examples to benchmark/kernel; for benchmarking to_hetero_mag.py we could implement a separate script in benchmark/ (okay to be done in a follow-up PR).

Review comments on:
- benchmark/citation/appnp.py (resolved)
- benchmark/citation/train_eval.py (resolved)
- torch_geometric/profile/profile.py
- test/profile/test_profile.py (resolved)
@yanbing-j (Contributor, Author) commented:

> Thank you! I think this looks great. My only concern is that I would like to keep PyG examples lightweight - for PNA, we could think about adding examples to benchmark/kernel, for benchmarking to_hetero_mag.py we could implement a separate script in benchmark/ (okay to be done in a follow-up PR).

Hi @rusty1s, thank you for your review. I will move PNA and to_hetero_mag into benchmark/ in the next inference-benchmark PR. Also, I find that codecov fails for profile.py even though I have added a test for it. Could you please give me some advice?

@lightaime (Contributor) left a comment

Thanks! LGTM.

@rusty1s (Member) commented Jul 12, 2022

@yanbing-j Currently, test_profile is only executed on machines with GPU (see the @withCUDA decorator). Can we write a short test on CPU as well?

@yanbing-j (Contributor, Author) commented:

> @yanbing-j Currently, test_profile is only executed on machines with GPU (see the @withCUDA decorator). Can we write a short test on CPU as well?

Hi @rusty1s, it seems there is no @withCPU decorator at the moment, so I wrote the test so that it can run on both CPU and GPU.
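One common pattern for such a dual-device test (a hypothetical sketch, not the PR's actual implementation) is to parametrize over the devices that are actually available, so the CPU case always runs and the CUDA case is added only when a GPU is present:

```python
import pytest
import torch

# Always test on CPU; add CUDA only when a GPU is actually available.
devices = ['cpu']
if torch.cuda.is_available():
    devices.append('cuda')


@pytest.mark.parametrize('device', devices)
def test_runs_on_device(device):
    # Placeholder workload standing in for the profiled model.
    x = torch.randn(4, 8, device=device)
    lin = torch.nn.Linear(8, 2).to(device)
    assert lin(x).size() == (4, 2)
```

On a CPU-only machine this collects just the `cpu` case, so CI without a GPU still exercises the code path.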

@rusty1s (Member) commented Jul 12, 2022

Yes, this sounds good.

@yanbing-j (Contributor, Author) commented:

@rusty1s Could you please help merge this to master?

@rusty1s rusty1s changed the title [benchmark] Add inference test for current benchmarking and record performance [Benchmark] Add inference test for current benchmarking and record performance Jul 13, 2022
3 participants