Remove legacy triton operators (fairinternal/xformers#1140)
* Remove legacy triton operators

Those haven't been maintained in a while. Following #848

* Remove docs

__original_commit__ = fairinternal/xformers@5d7c0de
fmassa authored and xFormers Bot committed Jun 24, 2024
1 parent b1b69fa commit 9ed9561
Showing 95 changed files with 11 additions and 3,497 deletions.
25 binary files not shown.
Binary file removed docs/plots/fused_linear/FusedLinear_fp16_FW_gelu.png
Binary file removed docs/plots/fused_linear/FusedLinear_fp16_FW_none.png
Binary file removed docs/plots/fused_linear/FusedLinear_fp16_FW_relu.png
Binary file removed docs/plots/fused_linear/FusedLinear_fp32_FW_gelu.png
Binary file removed docs/plots/fused_linear/FusedLinear_fp32_FW_none.png
Binary file removed docs/plots/fused_linear/FusedLinear_fp32_FW_relu.png
Binary file removed docs/plots/layer_norm/LayerNorm_FW_torch.float16.png
Binary file removed docs/plots/strided_sum/Strided_sum_fp16.png
Binary file removed docs/plots/strided_sum/Strided_sum_fp32.png
37 other file diffs not rendered.
25 changes: 0 additions & 25 deletions docs/source/custom_parts/index.rst
@@ -35,31 +35,6 @@ The sparse attention computation is automatically triggered when using the **scaled dot product** attention
There is nothing specific to do, and a couple of examples are provided in the tutorials.




Triton parts
############

1. Requirements
***************

We use Triton_ to implement the following parts.
These parts are only visible on a CUDA-enabled machine, and Triton needs to be installed (`pip install triton`);
if either of these conditions is not met, a warning is issued.


2. Possible usage
*****************

The following parts are independent and can be used as-is in any model,
provided the above requirements (Triton is installed, and there is a CUDA GPU present) are fulfilled.
They are used by default, when possible, in some of the xFormers building blocks.

.. automodule:: xformers.triton
:members:
:undoc-members:


.. _Triton: https://triton-lang.org/
.. _Sputnik: https://github.com/google-research/sputnik
.. _see: https://github.com/facebookresearch/xformers/blob/main/xformers/components/attention/scaled_dot_product.py
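The removed section gated these parts on two runtime conditions only: a CUDA GPU and an installed Triton. For reference, a minimal sketch of that availability check using standard library and PyTorch calls; the helper name is hypothetical and not an xFormers API:

import importlib.util

import torch


def triton_parts_available() -> bool:
    # Mirrors the removed documentation: the Triton parts required both a
    # CUDA-enabled machine and an installed `triton` package
    # (`pip install triton`); otherwise a warning was issued.
    return torch.cuda.is_available() and importlib.util.find_spec("triton") is not None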
1 change: 0 additions & 1 deletion docs/source/tutorials/index.rst
@@ -9,4 +9,3 @@ Tutorials
extend_attentions
use_attention
reversible
-  triton
154 changes: 0 additions & 154 deletions docs/source/tutorials/triton.rst

This file was deleted.

2 changes: 1 addition & 1 deletion examples/cifar_ViT.py
@@ -79,7 +79,7 @@ def __init__(
},
},
"feedforward_config": {
"name": "FusedMLP",
"name": "MLP",
"dropout": mlp_pdrop,
"activation": "gelu",
"hidden_layer_multiplier": hidden_layer_multiplier,
2 changes: 1 addition & 1 deletion examples/microGPT.py
@@ -72,7 +72,7 @@ def __init__(
},
},
"feedforward_config": {
"name": "FusedMLP", # Use MLP if Triton is not available
"name": "MLP",
"dropout": self.hparams.mlp_pdrop,
"activation": "gelu",
"hidden_layer_multiplier": self.hparams.hidden_layer_multiplier,
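Both example diffs make the same substitution in the factory config. A minimal sketch of the resulting feedforward sub-config; the values are placeholders standing in for the examples' hyperparameters, not taken verbatim from either file:

# Feedforward block of the xFormers factory config after this change:
# the plain "MLP" replaces the Triton-backed "FusedMLP".
feedforward_config = {
    "name": "MLP",
    "dropout": 0.1,                # mlp_pdrop in the examples
    "activation": "gelu",
    "hidden_layer_multiplier": 4,  # hidden_layer_multiplier in the examples
}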
5 changes: 1 addition & 4 deletions tests/test_pickling.py
@@ -12,7 +12,6 @@
import pytest
from torch import nn

- from xformers import _is_triton_available
from xformers.factory import xFormer, xFormerConfig

test_config = [
@@ -33,7 +32,7 @@
},
},
"feedforward_config": {
"name": "FusedMLP",
"name": "MLP",
"dropout": 0.1,
"activation": "gelu",
"hidden_layer_multiplier": 4,
@@ -51,8 +50,6 @@ def __init__(self, mlp):


MLPs = ["MLP"]
- if _is_triton_available():
-     MLPs.append("FusedMLP")


@pytest.mark.parametrize("mlp", MLPs)
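With the Triton-gated branch removed, the pickling test parametrizes over the plain MLP only. A minimal sketch of the resulting pattern; the test body is a stand-in and the function name is illustrative, not copied from the file:

import pickle

import pytest

MLPs = ["MLP"]  # previously also "FusedMLP" when _is_triton_available() returned True


@pytest.mark.parametrize("mlp", MLPs)
def test_pickle_roundtrip(mlp: str) -> None:
    # Stand-in for the real test: it builds an xFormer whose feedforward
    # "name" is `mlp` and checks that the model survives pickling.
    cfg = {"feedforward_config": {"name": mlp, "activation": "gelu"}}
    assert pickle.loads(pickle.dumps(cfg)) == cfg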