Could we move this issue to the pyperformance repo?
It's becoming obvious that:
It seems that one way to address this would be to lean into "tags" more in the pyperformance/pyperf ecosystem. pyperformance already allows for tags in each benchmark's `pyproject.toml`.
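For illustration, here is a minimal sketch of reading those tags out of a benchmark directory, assuming they sit under `[tool.pyperformance]` in `pyproject.toml` as either a list or a space-separated string; the exact schema and the `load_tags` helper are assumptions for this example, not pyperformance's actual API:

```python
import tomllib  # stdlib TOML parser, Python 3.11+
from pathlib import Path

def load_tags(benchmark_dir: str) -> list[str]:
    """Return the tags declared in one benchmark's pyproject.toml.

    Assumes tags live under [tool.pyperformance]; the real schema
    may differ in detail.
    """
    pyproject = Path(benchmark_dir) / "pyproject.toml"
    data = tomllib.loads(pyproject.read_text())
    tags = data.get("tool", {}).get("pyperformance", {}).get("tags", [])
    # Accept either `tags = ["math", "serialize"]` or `tags = "math serialize"`.
    if isinstance(tags, str):
        tags = tags.split()
    return tags
```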
I propose we store each benchmark's tags in its results, in pyperf's `metadata` dictionary. `pyperf compare_to` would then calculate the geometric mean over the subset of benchmarks carrying each tag found in the results, as well as over "all" benchmarks (the existing behavior). This could be behind a flag if backward compatibility matters. A sketch of the per-tag calculation follows below.
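To make that concrete, here is a minimal sketch of the proposed per-tag aggregation, assuming each benchmark comparison has been reduced to a head/baseline timing ratio and carries its tags; the result shape and function name are illustrative assumptions, not pyperf's actual API:

```python
from collections import defaultdict
from statistics import geometric_mean

def means_by_tag(comparisons: list[dict]) -> dict[str, float]:
    """Compute a geometric mean of timing ratios per tag.

    Each entry is assumed to look like
    {"name": "telco", "tags": ["math"], "ratio": 0.95}, where "ratio"
    is head time / baseline time for one benchmark (illustrative shape).
    """
    by_tag = defaultdict(list)
    for comp in comparisons:
        # "all" reproduces the existing behavior: one mean over everything.
        by_tag["all"].append(comp["ratio"])
        # A benchmark may carry several tags, so it contributes to each one.
        for tag in comp.get("tags", []):
            by_tag[tag].append(comp["ratio"])
    return {tag: geometric_mean(ratios) for tag, ratios in by_tag.items()}

comparisons = [
    {"name": "telco", "tags": ["math"], "ratio": 0.95},
    {"name": "json_dumps", "tags": ["serialize"], "ratio": 1.10},
    {"name": "pickle", "tags": ["serialize"], "ratio": 1.05},
]
for tag, mean in sorted(means_by_tag(comparisons).items()):
    print(f"{tag}: {mean:.3f}")  # all: 1.031, math: 0.950, serialize: 1.075
```

Older results files without tags in their metadata would simply contribute only to the "all" group, which preserves the existing behavior.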
Alternatives:

We could instead use the nested benchmark hierarchy, rather than tags. Personally, I think tags are easier to understand and more flexible (a benchmark could be associated with multiple tags).