Commit 5231d95

Drop Pytorch 2.1

1 parent: 979702c

File tree: 3 files changed (+5, -10 lines)

.github/workflows/publish.yml (+3, -8)

@@ -44,7 +44,7 @@ jobs:
         # manylinux docker image, but I haven't figured out how to install CUDA on manylinux.
         os: [ubuntu-20.04]
         python-version: ['3.9', '3.10', '3.11', '3.12', '3.13']
-        torch-version: ['2.1.2', '2.2.2', '2.3.1', '2.4.0', '2.5.1', '2.6.0']
+        torch-version: ['2.2.2', '2.3.1', '2.4.0', '2.5.1', '2.6.0']
         cuda-version: ['12.4.1']
         # We need separate wheels that either uses C++11 ABI (-D_GLIBCXX_USE_CXX11_ABI) or not.
         # Pytorch wheels currently don't use it, but nvcr images have Pytorch compiled with C++11 ABI.
@@ -53,12 +53,7 @@ jobs:
         cxx11_abi: ['FALSE', 'TRUE']
         exclude:
           # see https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix
-          # Pytorch < 2.2 does not support Python 3.12
-          - torch-version: '2.1.2'
-            python-version: '3.12'
           # Pytorch < 2.5 does not support Python 3.13
-          - torch-version: '2.1.2'
-            python-version: '3.13'
          - torch-version: '2.2.2'
            python-version: '3.13'
          - torch-version: '2.3.1'
@@ -122,8 +117,8 @@ jobs:
        # see https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix
        # This code is ugly, maybe there's a better way to do this.
        export TORCH_CUDA_VERSION=$(python -c "from os import environ as env; \
-          minv = {'2.1': 118, '2.2': 118, '2.3': 118, '2.4': 118, '2.5': 118, '2.6': 118}[env['MATRIX_TORCH_VERSION']]; \
-          maxv = {'2.1': 121, '2.2': 121, '2.3': 121, '2.4': 124, '2.5': 124, '2.6': 124}[env['MATRIX_TORCH_VERSION']]; \
+          minv = {'2.2': 118, '2.3': 118, '2.4': 118, '2.5': 118, '2.6': 118}[env['MATRIX_TORCH_VERSION']]; \
+          maxv = {'2.2': 121, '2.3': 121, '2.4': 124, '2.5': 124, '2.6': 124}[env['MATRIX_TORCH_VERSION']]; \
          print(minv if int(env['MATRIX_CUDA_VERSION']) < 120 else maxv)" \
        )
        if [[ ${{ matrix.torch-version }} == *"dev"* ]]; then
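The inline `python -c` one-liner above picks the CUDA toolkit version to build each wheel against. As a sketch, it can be written as a standalone function (the function name and plain arguments are hypothetical; the tables mirror the `minv`/`maxv` dicts in the diff after 2.1 is dropped):

```python
def torch_cuda_version(matrix_torch_version: str, matrix_cuda_version: int) -> int:
    """Pick the CUDA toolkit version for a given torch-version/cuda-version pair.

    Versions are encoded as integers, e.g. 118 for CUDA 11.8, 124 for 12.4.
    """
    # Oldest and newest CUDA toolkit supported per PyTorch minor release,
    # per the PyTorch release compatibility matrix linked in the workflow.
    minv = {'2.2': 118, '2.3': 118, '2.4': 118, '2.5': 118, '2.6': 118}[matrix_torch_version]
    maxv = {'2.2': 121, '2.3': 121, '2.4': 124, '2.5': 124, '2.6': 124}[matrix_torch_version]
    # CUDA 11.x requests build against the oldest supported toolkit,
    # CUDA 12.x requests build against the newest.
    return minv if matrix_cuda_version < 120 else maxv

print(torch_cuda_version('2.6', 124))  # 124
print(torch_cuda_version('2.2', 118))  # 118
```

A lookup like `torch_cuda_version('2.1', ...)` now raises `KeyError`, which is consistent with 2.1 being removed from the build matrix.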

README.md (+1, -1)

@@ -65,7 +65,7 @@ flash_attn_interface.flash_attn_func()
 ## Installation and features
 **Requirements:**
 - CUDA toolkit or ROCm toolkit
-- PyTorch 2.1 and above.
+- PyTorch 2.2 and above.
 - `packaging` Python package (`pip install packaging`)
 - `ninja` Python package (`pip install ninja`) *
 - Linux. Might work for Windows starting v2.3.2 (we've seen a few positive [reports](https://github.com/Dao-AILab/flash-attention/issues/595)) but Windows compilation still requires more testing. If you have ideas on how to set up prebuilt CUDA wheels for Windows, please reach out via Github issue.
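The new "PyTorch 2.2 and above" floor can be checked before building, using the `packaging` package that the README already lists as a requirement. This helper is a sketch, not part of the repo; `meets_torch_requirement` is a hypothetical name:

```python
from packaging.version import Version

def meets_torch_requirement(torch_version: str, minimum: str = "2.2") -> bool:
    """Return True if an installed torch version satisfies the minimum.

    PEP 440 comparison via packaging.version handles patch releases
    correctly, e.g. '2.2.2' >= '2.2'.
    """
    return Version(torch_version) >= Version(minimum)

print(meets_torch_requirement("2.1.2"))  # False: support dropped by this commit
print(meets_torch_requirement("2.2.2"))  # True
```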

flash_attn/__init__.py (+1, -1)

@@ -1,4 +1,4 @@
-__version__ = "2.7.4"
+__version__ = "2.7.4.post1"
 
 from flash_attn.flash_attn_interface import (
     flash_attn_func,
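The bump to `"2.7.4.post1"` is a PEP 440 post-release: it sorts after 2.7.4 but before 2.7.5, so installers pick it up as the newest build without implying new library features. A small check with `packaging` (which the README lists as a dependency) illustrates the ordering:

```python
from packaging.version import Version

# Post-releases sort between the base version and the next patch release.
assert Version("2.7.4.post1") > Version("2.7.4")
assert Version("2.7.4.post1") < Version("2.7.5")

# packaging exposes the post-release status directly.
print(Version("2.7.4.post1").is_postrelease)  # True
```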
