Build for PyTorch 2.2.0
ghstack-source-id: dddb9bb5759b00cbae2a80384a011ded7240b0a3
Pull Request resolved: https://github.com/fairinternal/xformers/pull/1023

__original_commit__ = fairinternal/xformers@1b54dbf7976fd8975e531dbcdd68e8c93871ad2f
xFormers Bot committed Jan 30, 2024
1 parent 6fdfa0a commit f7e46d5
Showing 4 changed files with 11 additions and 9 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/conda.yml
```diff
@@ -33,13 +33,13 @@ jobs:
           - "3.9"
           - "3.10"
         config:
-          - torch_version: "2.1.2"
+          - torch_version: "2.2.0"
             torch_channel: "pytorch"
             cuda_version: "12.1.0"
             cuda_dep_runtime: ">=12.0,<13.0"
             cuda_short_version: "121"
 
-          - torch_version: "2.1.2"
+          - torch_version: "2.2.0"
             torch_channel: "pytorch"
             cuda_version: "11.8.0"
             cuda_dep_runtime: ">=11.7,<11.9"
```
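The `cuda_dep_runtime` values above are conda-style version constraints ("any CUDA runtime at least 12.0 and below 13.0"). As an illustrative sketch only — not conda's actual resolver — a toy evaluator for the `>=a,<b` form used here:

```python
def satisfies(version: str, spec: str) -> bool:
    """Toy check of a dotted version against a comma-separated
    constraint using only '>=' and '<' clauses, e.g. ">=12.0,<13.0".
    Illustration of the constraint syntax, not conda's resolver."""
    v = tuple(map(int, version.split(".")))
    for clause in spec.split(","):
        if clause.startswith(">="):
            if not v >= tuple(map(int, clause[2:].split("."))):
                return False
        elif clause.startswith("<"):
            if not v < tuple(map(int, clause[1:].split("."))):
                return False
    return True

print(satisfies("12.1", ">=12.0,<13.0"))  # CUDA 12.1 fits the cu121 build
print(satisfies("11.8", ">=11.7,<11.9"))  # CUDA 11.8 fits the cu118 build
print(satisfies("12.1", ">=11.7,<11.9"))  # but not the cu118 constraint
```

Tuple comparison on the parsed components gives the usual numeric ordering, so `"11.10"` would correctly sort above `"11.9"`.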
8 changes: 4 additions & 4 deletions .github/workflows/wheels.yml
```diff
@@ -27,7 +27,7 @@ jobs:
           - "3.10"
           - "3.11"
         torch_version:
-          - "2.1.2"
+          - "2.2.0"
         cuda_short_version:
           - "118"
           - "121"
@@ -45,7 +45,7 @@ jobs:
     uses: ./.github/workflows/wheels_upload_pip.yml
     with:
       twine_username: __token__
-      filter: "*torch2.1.2+cu121*"
+      filter: "*torch2.2.0+cu121*"
       execute: ${{ github.repository == 'facebookresearch/xformers' && github.event_name != 'pull_request' }}
     secrets:
       twine_password: ${{ secrets.PYPI_TOKEN }}
@@ -57,7 +57,7 @@ jobs:
       aws_role: "arn:aws:iam::749337293305:role/pytorch_bot_uploader_role"
      s3_path: s3://pytorch/whl/cu118/
       aws_s3_cp_extra_args: --acl public-read
-      filter: "*torch2.1.2+cu118*"
+      filter: "*torch2.2.0+cu118*"
       execute: ${{ github.repository == 'facebookresearch/xformers' && github.ref_type == 'tag' }}
 
   upload_pt_cu121:
@@ -67,6 +67,6 @@ jobs:
       aws_role: "arn:aws:iam::749337293305:role/pytorch_bot_uploader_role"
       s3_path: s3://pytorch/whl/cu121/
       aws_s3_cp_extra_args: --acl public-read
-      filter: "*torch2.1.2+cu121*"
+      filter: "*torch2.2.0+cu121*"
       execute: ${{ github.repository == 'facebookresearch/xformers' && github.ref_type == 'tag' }}
```

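The `filter` values updated above are shell-style globs matched against built wheel filenames, so only wheels built for the matching PyTorch/CUDA pair get uploaded. A small illustration with Python's `fnmatch` (the wheel filenames below are hypothetical, for demonstration only):

```python
from fnmatch import fnmatch

# Hypothetical wheel filenames, for illustration only.
wheels = [
    "xformers-0.0.24-cp310-manylinux2014_x86_64.torch2.2.0+cu121.whl",
    "xformers-0.0.24-cp310-manylinux2014_x86_64.torch2.1.2+cu121.whl",
    "xformers-0.0.24-cp310-manylinux2014_x86_64.torch2.2.0+cu118.whl",
]

# The updated workflow filter: only PyTorch 2.2.0 + CUDA 12.1 wheels pass.
selected = [w for w in wheels if fnmatch(w, "*torch2.2.0+cu121*")]
print(selected)
```

Because `+` and `.` are literal characters in glob syntax, bumping the version string in the filter is enough to retarget the upload jobs.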
4 changes: 3 additions & 1 deletion CHANGELOG.md
```diff
@@ -5,16 +5,18 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
 ## [0.0.24] - TBD
+Pre-built binary wheels require PyTorch 2.2.0
 ### Added
 - Added components for model/sequence parallelism, as near-drop-in replacements for FairScale/Megatron Column&RowParallelLinear modules. They support fusing communication and computation for sequence parallelism, thus making the communication effectively free.
 - Added kernels for training models with 2:4-sparsity. We introduced a very fast kernel for converting a matrix A into 24-sparse format, which can be used during training to sparsify weights dynamically, activations etc... xFormers also provides an API that is compatible with torch-compile, see `xformers.ops.sparsify24`.
 ### Improved
 - Make selective activation checkpointing be compatible with torch.compile.
 ### Removed
 - Triton kernels now require a GPU with compute capability 8.0 at least (A100 or newer). This is due to newer versions of triton not supporting older GPUs correctly
 - Removed support for PyTorch version older than 2.1.0
+
 ## [0.0.23] - 2023-12-05
-Pre-built binary wheels require PyTorch 2.1.1
+Pre-built binary wheels require PyTorch 2.1.1 (xFormers `0.0.23`) or PyTorch 2.1.2 (xFormers `0.0.23.post1`).
 ### Fixed
 - fMHA: Fixed a bug in cutlass backend forward pass where the logsumexp was not correctly calculated, resulting in wrong results in the BW pass. This would happen with MQA when one sequence has a query with `length%64 == 1`
 - fMHA: Updated Flash-Attention to v2.3.6 - this fixes a performance regression in causal backward passes, and now supports `BlockDiagonalCausalWithOffsetPaddedKeysMask`
```
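The 2:4 ("24") sparsity format mentioned in the changelog keeps at most two non-zero values in every contiguous group of four. A pure-Python sketch of the pattern itself — the helper name is hypothetical, and the real `xformers.ops.sparsify24` kernel operates on GPU tensors, not lists:

```python
def sparsify24_reference(values):
    """Zero the two smallest-magnitude entries in each group of four.

    Hypothetical pure-Python reference for the 2:4 sparsity pattern;
    not the xformers GPU kernel.
    """
    assert len(values) % 4 == 0, "length must be a multiple of 4"
    out = []
    for i in range(0, len(values), 4):
        group = list(values[i : i + 4])
        # Indices of the two smallest magnitudes in this group of 4.
        drop = sorted(range(4), key=lambda j: abs(group[j]))[:2]
        for j in drop:
            group[j] = 0.0
        out.extend(group)
    return out

print(sparsify24_reference([0.1, -2.0, 3.0, 0.5, 1.0, 0.2, -0.3, 4.0]))
# -> [0.0, -2.0, 3.0, 0.0, 1.0, 0.0, 0.0, 4.0]
```

The fixed 2-of-4 structure is what lets hardware (e.g. sparse tensor cores) skip the zeroed entries with a compact metadata encoding, which is why the format is attractive for training-time weight sparsification.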
4 changes: 2 additions & 2 deletions README.md
````diff
@@ -28,13 +28,13 @@ xFormers is:
 
 ## Installing xFormers
 
-* **(RECOMMENDED, linux) Install latest stable with conda**: Requires [PyTorch 2.1.2 installed with conda](https://pytorch.org/get-started/locally/)
+* **(RECOMMENDED, linux) Install latest stable with conda**: Requires [PyTorch 2.2.0 installed with conda](https://pytorch.org/get-started/locally/)
 
 ```bash
 conda install xformers -c xformers
 ```
 
-* **(RECOMMENDED, linux & win) Install latest stable with pip**: Requires [PyTorch 2.1.2](https://pytorch.org/get-started/locally/)
+* **(RECOMMENDED, linux & win) Install latest stable with pip**: Requires [PyTorch 2.2.0](https://pytorch.org/get-started/locally/)
 
 ```bash
 # cuda 11.8 version
````
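As the README changes show, each pre-built wheel is tied to one exact PyTorch release, so a mismatched local `torch` install is the usual failure mode. A minimal sketch of that compatibility check — the helper is hypothetical and not part of xFormers:

```python
def torch_matches_wheel(installed: str, required: str = "2.2.0") -> bool:
    """Compare the 'X.Y.Z' prefix of an installed torch version string
    (e.g. '2.2.0+cu121') against the version the wheel was built for.
    Hypothetical helper for illustration; not xformers code."""
    base = installed.split("+", 1)[0]  # strip the local '+cuXXX' suffix
    return tuple(map(int, base.split("."))) == tuple(map(int, required.split(".")))

print(torch_matches_wheel("2.2.0+cu121"))  # True: matches the 0.0.24 wheels
print(torch_matches_wheel("2.1.2+cu118"))  # False: needs the 0.0.23.post1 wheels
```

In practice one would read the installed version from `torch.__version__`; the string always carries the base release before any `+cuXXX` local suffix.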
