Allow TMA benchmarks for flex-attention kernel #225
Summary:
This diff adds a new `--use-tma` argument to the `operator.py` file in the `flex_attention` directory of the tritonbench repository. The argument lets users enable the Tensor Memory Accelerator (TMA) in the kernel options for flex-attention benchmarks.

Changes:
- Added a `--use-tma` argument to the `parse_args` function in `operator.py`
- Updated the `parse_args` function to store the `--use-tma` value in the `args` object (see the sketch below for how the flag might be wired through)

Differential Revision: D74839480
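
For context, here is a minimal sketch of what such a change could look like. This is not the actual diff from the PR: the `parse_args` wiring below assumes standard `argparse` usage, and the `USE_TMA` kernel-option key passed to `flex_attention` is an assumption about the option name, not something confirmed by this summary.

```python
import argparse
from typing import List, Optional

import torch
from torch.nn.attention.flex_attention import flex_attention


def parse_args(argv: Optional[List[str]] = None) -> argparse.Namespace:
    """Hypothetical sketch of the flex-attention benchmark arg parser."""
    parser = argparse.ArgumentParser(description="flex_attention benchmark")
    # New flag from this diff: opt into TMA-backed kernels.
    parser.add_argument(
        "--use-tma",
        action="store_true",
        help="Enable the Tensor Memory Accelerator (TMA) in kernel options",
    )
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_args()

    # "USE_TMA" is an assumed kernel-option key for illustration; the real
    # key the benchmark forwards is not shown in this summary.
    kernel_options = {"USE_TMA": True} if args.use_tma else None

    # Small example inputs: (batch, heads, seq_len, head_dim).
    q, k, v = (
        torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
        for _ in range(3)
    )
    compiled = torch.compile(flex_attention)
    out = compiled(q, k, v, kernel_options=kernel_options)
    print(out.shape)
```

In the actual benchmark, the operator would presumably read the stored `--use-tma` value off the `args` object and fold it into whatever kernel-options dict it already builds, so existing benchmark runs are unaffected unless the flag is passed.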