Add torch 2.1.0 checking in CI #2955

Merged 8 commits on Oct 11, 2023
2 changes: 2 additions & 0 deletions .circleci/docker/Dockerfile
@@ -8,6 +8,8 @@ FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
# https://github.com/pytorch/pytorch/issues/37377
ENV MKL_THREADING_LAYER GNU

+ARG DEBIAN_FRONTEND=noninteractive
+
# To fix GPG key error when running apt-get update
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
24 changes: 15 additions & 9 deletions .circleci/test.yml
@@ -169,16 +169,22 @@ jobs:
type: string
cuda:
type: enum
enum: ["10.1", "10.2", "11.1", "11.7"]
enum: ["10.1", "10.2", "11.1", "11.7", "11.8"]
cudnn:
type: integer
default: 7
machine:
-image: ubuntu-2004-cuda-11.4:202110-01
+image: linux-cuda-11:default
docker_layer_caching: true
-resource_class: gpu.nvidia.small
+resource_class: gpu.nvidia.small.multi
steps:
- checkout
+- run:
+name: Install nvidia-container-toolkit and Restart Docker
+command: |
+sudo apt-get update
+sudo apt-get install -y nvidia-container-toolkit
+sudo systemctl restart docker
- run:
name: Build Docker image
command: |
@@ -240,8 +246,8 @@ workflows:
- build_without_ops
- build_cpu:
name: maximum_version_cpu
-torch: 2.0.0
-torchvision: 0.15.1
+torch: 2.1.0
+torchvision: 0.16.0
python: 3.9.0
requires:
- minimum_version_cpu
@@ -257,10 +263,10 @@
- hold_cuda_test
- build_cuda:
name: maximum_version_gpu
-torch: 2.0.0
+torch: 2.1.0
# Use double quotation mark to explicitly specify its type
# as string instead of number
cuda: "11.7"
cuda: "11.8"
cudnn: 8
requires:
- hold_cuda_test
@@ -281,10 +287,10 @@
- main
- build_cuda:
name: maximum_version_gpu
-torch: 2.0.0
+torch: 2.1.0
# Use double quotation mark to explicitly specify its type
# as string instead of number
cuda: "11.7"
cuda: "11.8"
cudnn: 8
filters:
branches:
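
Note: the maximum_version jobs above now pair torch 2.1.0 with torchvision 0.16.0 and CUDA 11.8. As a rough illustration of the version check this matrix implies (a minimal sketch, not taken from this PR; the helper name and default version strings are assumptions), a build step could fail fast when the installed wheels do not match the matrix entry:

import torch
import torchvision

def check_ci_versions(expected_torch: str = '2.1.0',
                      expected_torchvision: str = '0.16.0') -> None:
    """Fail fast if the installed wheels do not match the CI matrix entry."""
    # torch.__version__ may carry a local suffix such as '2.1.0+cu118',
    # so compare only the leading release segment.
    if not torch.__version__.startswith(expected_torch):
        raise RuntimeError(
            f'Expected torch {expected_torch}, got {torch.__version__}')
    if not torchvision.__version__.startswith(expected_torchvision):
        raise RuntimeError(
            f'Expected torchvision {expected_torchvision}, '
            f'got {torchvision.__version__}')

check_ci_versions()
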
8 changes: 6 additions & 2 deletions .github/workflows/build_macos_wheel.yml
@@ -8,11 +8,11 @@ concurrency:

jobs:
build_macos10_wheel:
-runs-on: macos-10.15
+runs-on: macos-latest
if: contains(github.event.head_commit.message, 'Bump version to')
strategy:
matrix:
-torch: [1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0, 2.0.0]
+torch: [1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0, 2.0.0, 2.1.0]
python-version: [3.7, 3.8, 3.9, '3.10', '3.11']
include:
- torch: 1.8.0
@@ -29,6 +29,8 @@ jobs:
torchvision: 0.14.0
- torch: 2.0.0
torchvision: 0.15.1
+- torch: 2.1.0
+torchvision: 0.16.0
exclude:
- torch: 1.8.0
python-version: '3.10'
@@ -52,6 +54,8 @@
python-version: '3.11'
- torch: 2.0.0
python-version: 3.7
+- torch: 2.1.0
+python-version: 3.7
steps:
- uses: actions/checkout@v2
- name: Set up Python
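
Note: the new exclude entry follows the existing torch 2.0.0 rule: PyTorch 2.x wheels are not published for Python 3.7, so the torch 2.1.0 / Python 3.7 combination is dropped from the wheel-building matrix. A minimal sketch of the same constraint in Python (the mapping and function are illustrative, not part of the workflow):

# Minimum Python version required by each PyTorch release in the matrix
# (illustrative subset; both 2.0.0 and 2.1.0 dropped Python 3.7 support).
TORCH_MIN_PYTHON = {'2.0.0': (3, 8), '2.1.0': (3, 8)}

def is_supported(torch_version: str, python_version: str) -> bool:
    major, minor = (int(part) for part in python_version.split('.')[:2])
    return (major, minor) >= TORCH_MIN_PYTHON.get(torch_version, (3, 7))

print(is_supported('2.1.0', '3.7'))   # False -> excluded from the matrix
print(is_supported('2.1.0', '3.10'))  # True
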
19 changes: 12 additions & 7 deletions .github/workflows/merge_stage_test.yml
@@ -114,7 +114,7 @@ jobs:
strategy:
matrix:
python-version: [3.7]
-torch: [1.8.1, 1.9.1, 1.10.1, 1.11.0, 1.12.0, 1.13.0, 2.0.0]
+torch: [1.8.1, 1.9.1, 1.10.1, 1.11.0, 1.12.0, 1.13.0, 2.0.0, 2.1.0]
include:
- torch: 1.8.1
torchvision: 0.9.1
@@ -131,9 +131,14 @@
- torch: 2.0.0
torchvision: 0.15.1
python-version: 3.8
+- torch: 2.1.0
+torchvision: 0.16.0
+python-version: 3.8
exclude:
- torch: 2.0.0
python-version: 3.7
+- torch: 2.1.0
+python-version: 3.7
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
@@ -311,13 +316,13 @@ jobs:
runs-on: windows-2019
strategy:
matrix:
-torch: [1.8.1, 2.0.0]
+torch: [1.8.1, 2.1.0]
include:
- torch: 1.8.1
torchvision: 0.9.1
python-version: 3.7
-- torch: 2.0.0
-torchvision: 0.15.1
+- torch: 2.1.0
+torchvision: 0.16.0
python-version: 3.8
steps:
- uses: actions/checkout@v2
@@ -343,13 +348,13 @@
runs-on: macos-latest
strategy:
matrix:
-torch: [1.8.1, 2.0.0]
+torch: [1.8.1, 2.1.0]
include:
- torch: 1.8.1
torchvision: 0.9.1
python-version: 3.7
-- torch: 2.0.0
-torchvision: 0.15.1
+- torch: 2.1.0
+torchvision: 0.16.0
python-version: 3.8
steps:
- uses: actions/checkout@v2
12 changes: 6 additions & 6 deletions .github/workflows/pr_stage_test.yml
@@ -132,13 +132,13 @@ jobs:
runs-on: windows-2019
strategy:
matrix:
-torch: [1.8.1, 2.0.0]
+torch: [1.8.1, 2.1.0]
include:
- torch: 1.8.1
torchvision: 0.9.1
python-version: 3.7
-- torch: 2.0.0
-torchvision: 0.15.1
+- torch: 2.1.0
+torchvision: 0.16.0
python-version: 3.8
steps:
- uses: actions/checkout@v2
@@ -164,13 +164,13 @@
runs-on: macos-latest
strategy:
matrix:
-torch: [1.8.1, 2.0.0]
+torch: [1.8.1, 2.1.0]
include:
- torch: 1.8.1
torchvision: 0.9.1
python-version: 3.7
-- torch: 2.0.0
-torchvision: 0.15.1
+- torch: 2.1.0
+torchvision: 0.16.0
python-version: 3.8
steps:
- uses: actions/checkout@v2
2 changes: 1 addition & 1 deletion mmcv/cnn/bricks/generalized_attention.py
@@ -371,7 +371,7 @@ def forward(self, x_input: torch.Tensor) -> torch.Tensor:
contiguous().\
view(1, 1, h*w, h_kv*w_kv)

-energy = energy.masked_fill_(cur_local_constraint_map,
+energy = energy.masked_fill_(cur_local_constraint_map.bool(),
float('-inf'))

attention = F.softmax(energy, 3)
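
Note: the .bool() cast accounts for stricter mask handling in newer PyTorch, where masked_fill_ expects a boolean mask and the integer masks that older releases only warned about are rejected under torch 2.1. A minimal standalone sketch of the same pattern (shapes and values are illustrative, not taken from generalized_attention.py):

import torch
import torch.nn.functional as F

energy = torch.randn(1, 1, 4, 4)
# Constraint maps are often stored as integer tensors; cast to bool so
# masked_fill_ accepts the mask on recent PyTorch releases.
constraint_map = torch.tensor([[0, 1, 0, 1],
                               [0, 0, 1, 0],
                               [1, 0, 0, 0],
                               [0, 1, 1, 0]],
                              dtype=torch.uint8).view(1, 1, 4, 4)

energy = energy.masked_fill_(constraint_map.bool(), float('-inf'))
attention = F.softmax(energy, dim=3)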