Add batch_size argument to normalization layers #7135

Merged

Conversation

kamil-andrzejewski
Contributor

It can speed up runtime because:

  1. We do not need to iterate over the batch dimension to find the maximum value.
  2. We do not have to read a tensor value that resides on the device.

Besides, dim_size can be used when a user works with fixed-size datasets.
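A minimal sketch of the idea, assuming the `batch_size` keyword added by this PR and using `InstanceNorm` as an example (exact signatures may differ across versions; the graph sizes below are illustrative only):

```python
import torch
from torch_geometric.nn import InstanceNorm

# Toy mini-batch: two graphs with 3 and 2 nodes, 16 features per node.
x = torch.randn(5, 16)
batch = torch.tensor([0, 0, 0, 1, 1])

norm = InstanceNorm(in_channels=16)

# Without the new argument, the layer has to infer the number of graphs,
# typically via int(batch.max()) + 1 -- a reduction over the batch vector
# plus a device-to-host read when `batch` lives on an accelerator.
out = norm(x, batch)

# With a fixed, known number of graphs (e.g., fixed-size datasets), passing
# it explicitly skips that reduction and synchronization.
out = norm(x, batch, batch_size=2)
```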

@codecov

codecov bot commented Apr 7, 2023

Codecov Report

Merging #7135 (1e5cc70) into master (1af056c) will decrease coverage by 0.04%.
The diff coverage is 100.00%.

❗ Current head 1e5cc70 differs from pull request most recent head c6466dd. Consider uploading reports for the commit c6466dd to get more accurate results

@@            Coverage Diff             @@
##           master    #7135      +/-   ##
==========================================
- Coverage   91.29%   91.25%   -0.04%     
==========================================
  Files         436      436              
  Lines       23927    23935       +8     
==========================================
  Hits        21843    21843              
- Misses       2084     2092       +8     
Impacted Files Coverage Δ
torch_geometric/utils/nested.py 100.00% <ø> (ø)
torch_geometric/nn/norm/graph_norm.py 100.00% <100.00%> (ø)
torch_geometric/nn/norm/graph_size_norm.py 93.75% <100.00%> (+0.41%) ⬆️
torch_geometric/nn/norm/instance_norm.py 100.00% <100.00%> (ø)
torch_geometric/nn/norm/layer_norm.py 100.00% <100.00%> (ø)
torch_geometric/nn/norm/pair_norm.py 100.00% <100.00%> (ø)

... and 2 files with indirect coverage changes


@kamil-andrzejewski kamil-andrzejewski force-pushed the kamil.andrzejewski/fixed_size_norm_layers branch from b73bed7 to 906719a on April 7, 2023 12:08

@kamil-andrzejewski kamil-andrzejewski force-pushed the kamil.andrzejewski/fixed_size_norm_layers branch from 906719a to 1e5cc70 on April 7, 2023 12:22
@kamil-andrzejewski kamil-andrzejewski changed the title from "Add dim_size arg for norm layers" to "Add dim_size arg for norm layers" on Apr 7, 2023
@rusty1s rusty1s changed the title from "Add dim_size arg for norm layers" to "Add batch_size argument to normalization layers" on Apr 9, 2023
@rusty1s rusty1s enabled auto-merge (squash) April 9, 2023 22:15
@rusty1s rusty1s merged commit 9609b37 into pyg-team:master Apr 9, 2023