Commit

[ci] Make sure we don't initialize CUDA context at import time (fairi…
danthe3rd authored and xFormers Bot committed Nov 4, 2024
1 parent de742ec commit 1277989
Showing 1 changed file with 5 additions and 0 deletions.
5 changes: 5 additions & 0 deletions .github/workflows/gpu_test_gh.yml
@@ -77,6 +77,11 @@ jobs:
         pip install -r requirements-test.txt --progress-bar off
     - run: TORCH_CUDA_ARCH_LIST=${{ matrix.gpu.sm }} python setup.py develop
     - run: python -m xformers.info
+    - name: xFormers import should not init cuda context
+      run: |
+        # NOTE: we check GPU version by default to determine if triton should be used
+        # and this initializes CUDA context, unless we set `XFORMERS_ENABLE_TRITON`
+        XFORMERS_ENABLE_TRITON=1 python -c "import xformers; import xformers.ops; import torch; assert not torch.cuda.is_initialized()"
     - name: Unit tests
       run: |
         python -m pytest --verbose --random-order-bucket=global --maxfail=20 --junitxml=test-results/junit.xml --cov-report=xml --cov=./ tests
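The check added above relies on a common pattern: an environment variable short-circuits a lazy hardware probe, so that merely importing the library has no GPU side effects. A minimal sketch of that pattern, using only the standard library (the function names `triton_enabled` and `_probe_gpu_capability` are illustrative stand-ins, not xFormers' actual internals):

```python
import os

def _probe_gpu_capability():
    # Stand-in for a real capability query (e.g. asking the driver for the
    # device's compute capability), which would initialize the CUDA context
    # as a side effect.
    raise RuntimeError("GPU probe ran: CUDA context would now be initialized")

def triton_enabled():
    # Illustrative sketch: if the env var is set, honor it and never touch
    # the GPU; only when it is unset do we fall back to probing the device.
    override = os.environ.get("XFORMERS_ENABLE_TRITON")
    if override is not None:
        return override == "1"
    return _probe_gpu_capability()

# With the override set, the decision is made without any GPU probe,
# which is exactly what the CI step asserts at import time.
os.environ["XFORMERS_ENABLE_TRITON"] = "1"
print(triton_enabled())  # → True
```

Setting `XFORMERS_ENABLE_TRITON=1` in the CI step forces the override path, so the subsequent `assert not torch.cuda.is_initialized()` can verify that nothing else in the import chain touched the GPU.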
