Releases: Dao-AILab/flash-attention
v2.7.1.post2: [CI] Use torch 2.6.0.dev20241001, reduce torch #include
v2.7.1.post1: [CI] Fix CUDA version for torch 2.6
v2.7.1: Bump to v2.7.1
v2.7.0.post2: [CI] Pytorch 2.5.1 does not support python 3.8
v2.7.0.post1: [CI] Switch back to CUDA 12.4
v2.7.0: Bump to v2.7.0
v2.6.3: Bump to v2.6.3
v2.6.2: Bump to v2.6.2
v2.6.1: Bump to v2.6.1
v2.6.0.post1: [CI] Compile with pytorch 2.4.0.dev20240514
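For reference, a minimal sketch (assuming the package was installed from PyPI as flash-attn, which is how these tagged releases are distributed) showing how to check which of the releases above is installed:

```python
# Minimal sketch: report the installed flash-attention release so it can be
# compared against the tags listed above. Assumes the PyPI package `flash-attn`
# (import name `flash_attn`) is installed in the current environment.
import flash_attn

print(flash_attn.__version__)  # e.g. "2.7.1.post2"
```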