
generator: loop if necessary for each FFT dim in 2D_SINGLE #606


Open: wants to merge 2 commits into develop

Conversation

evetsso (Contributor) commented Jun 17, 2025

Previously, the workgroup size for 2D_SINGLE kernels was chosen so that all FFTs in each dimension could be done by a single workgroup. But this becomes infeasible as dimensions get larger.

So instead, loop in the generator so that the available threads perform as many passes as needed to cover each dimension's FFTs (a sketch of the emitted loop shape is below). As a side effect, this means the extra FFT for even-length real-complex can be done by existing threads rather than making the first N threads do it while the rest wait.
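
To make the looping concrete, here is a minimal sketch of the per-dimension loop shape, assuming a HIP-style kernel; `do_fft_dim0`, `num_ffts_dim0`, and `tpt0` are hypothetical names standing in for generated code, not identifiers from the actual generator:

```cpp
#include <hip/hip_runtime.h>

// Placeholder for the generated per-transform work; the real generator
// emits the butterfly/twiddle code inline.
__device__ void do_fft_dim0(unsigned fft_idx, unsigned thread_in_fft)
{
    /* transform work elided */
}

__global__ void fft_2d_single_sketch(unsigned num_ffts_dim0, unsigned tpt0)
{
    const unsigned wgs           = blockDim.x;
    const unsigned ffts_per_pass = wgs / tpt0;
    // Ceiling division: enough passes to cover all FFTs in this dimension.
    const unsigned iterations    = (num_ffts_dim0 + ffts_per_pass - 1) / ffts_per_pass;
    for(unsigned i = 0; i < iterations; ++i)
    {
        const unsigned fft_idx = i * ffts_per_pass + threadIdx.x / tpt0;
        if(fft_idx < num_ffts_dim0) // surplus threads idle only on the final pass
            do_fft_dim0(fft_idx, threadIdx.x % tpt0);
        __syncthreads(); // all threads rejoin before the next pass
    }
}
```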

This PR shouldn't materially affect generated code: all existing 2D_SINGLE configs use `max(length0*tpt1, length1*tpt0)` for the workgroup size, so complex-complex kernels don't need to loop.

Real-complex kernels are different, though: they need an extra transform in one direction or the other. Prior to this PR, we'd just make the first tpt0 (or tpt1) threads do that extra transform while the rest waited. Now, in some cases there are enough spare threads that no extra iteration is needed; the arithmetic below illustrates both outcomes. That should be a small improvement, but I haven't been able to observe it in our benchmark suite.
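
For concreteness, a small host-side sketch of the sizing arithmetic under the rule quoted above; the `Dim2D` struct and helper functions are illustrative assumptions, not the generator's real API. It shows the even-length extra transform forcing a second pass when the workgroup is exactly full, and fitting into the first pass when the other dimension made the workgroup larger:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical mirror of the PR's naming: length0/length1 are the FFT
// lengths, tpt0/tpt1 the threads-per-transform for each dimension.
struct Dim2D
{
    unsigned length0, length1;
    unsigned tpt0, tpt1;
};

// Sizing rule quoted in the PR: enough threads for the larger dimension.
unsigned workgroup_size(const Dim2D& d)
{
    return std::max(d.length0 * d.tpt1, d.length1 * d.tpt0);
}

// Loop iterations needed for num_ffts transforms of tpt threads each.
unsigned iterations(unsigned num_ffts, unsigned tpt, unsigned wgs)
{
    const unsigned ffts_per_pass = wgs / tpt;
    return (num_ffts + ffts_per_pass - 1) / ffts_per_pass; // ceil division
}

int main()
{
    const Dim2D d{64, 64, 8, 8};
    const unsigned wgs = workgroup_size(d); // max(512, 512) = 512

    // Complex-complex: 64 row FFTs on 8 threads each fit in one pass.
    printf("c2c dim0: %u iteration(s)\n", iterations(d.length1, d.tpt0, wgs));

    // Even-length real-complex: one extra transform (65 total) forces a
    // second pass here, since 512 threads run exactly 64 FFTs per pass.
    printf("r2c dim0: %u iteration(s)\n", iterations(d.length1 + 1, d.tpt0, wgs));

    // With asymmetric lengths the smaller dimension has threads to spare,
    // so its extra transform fits in the same single pass.
    const Dim2D e{32, 64, 8, 8}; // wgs = max(256, 512) = 512
    printf("r2c dim1: %u iteration(s)\n",
           iterations(e.length0 + 1, e.tpt1, workgroup_size(e)));
}
```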
