Caching the tuple hash calculation speeds up some code significantly #131525
Labels

- interpreter-core (Objects, Python, Grammar, and Parser dirs)
- performance (Performance or resource usage)
- type-feature (A feature request or enhancement)
Comments
mdboom added a commit to mdboom/cpython that referenced this issue on Mar 20, 2025

mdboom added a commit to mdboom/cpython that referenced this issue on Mar 20, 2025
This would nicely simplify and speed up the pure Python implementation of `lru_cache`.
mdboom added a commit that referenced this issue on Mar 27, 2025:

* gh-131525: Cache the result of tuple_hash
* Fix debug builds
* Add blurb
* Fix formatting
* Pre-compute empty tuple singleton
* Mostly set the cache within tuple_alloc
* Fixes for TSAN
* Pre-compute empty tuple singleton
* Fix for 32-bit platforms
* Assert that op != NULL in _PyTuple_RESET_HASH_CACHE
* Use FT_ATOMIC_STORE_SSIZE_RELAXED macro
* Update Include/internal/pycore_tuple.h
* Fix alignment
* atomic load
* Update Objects/tupleobject.c

Co-authored-by: Bénédikt Tran <[email protected]>
Co-authored-by: Chris Eibl <[email protected]>
lgeiger added a commit to lgeiger/cpython that referenced this issue on Mar 31, 2025

lgeiger added a commit to lgeiger/cpython that referenced this issue on Mar 31, 2025
rhettinger pushed a commit that referenced this issue on Mar 31, 2025
diegorusso pushed a commit to diegorusso/cpython that referenced this issue on Apr 1, 2025
seehwan pushed a commit to seehwan/cpython that referenced this issue on Apr 16, 2025
seehwan pushed a commit to seehwan/cpython that referenced this issue on Apr 16, 2025
Proposal:

Back in 2013, it was determined that caching the result of `tuple_hash` did not yield any significant speedup. However, a lot has changed since then, and in a recent experiment that added a tuple hash cache back in, the mdp benchmark improved by 86%. Admittedly, there was no measurable improvement on any other benchmark, but the cache also appears to have no downside, including for memory usage as measured with `max_rss`.

Has this already been discussed elsewhere?
No response given
Links to previous discussion of this feature:
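The idea behind the proposal can be sketched in pure Python: store a sentinel meaning "hash not yet computed", fill it on the first `hash()` call, and return the stored value on every call after that. The real change lives in C in `Objects/tupleobject.c`; the class and attribute names below are hypothetical, for illustration only.

```python
class CachedHashTuple:
    """Illustrative lazy hash cache; not the real C implementation."""

    __slots__ = ('_items', '_hash')

    def __init__(self, *items):
        self._items = items
        self._hash = -1  # sentinel: -1 means "not computed yet"

    def __hash__(self):
        if self._hash == -1:
            h = hash(self._items)  # full element-by-element hash, once
            if h == -1:
                # -1 is reserved as the sentinel, so remap it
                # (CPython likewise never returns -1 from hash()).
                h = -2
            self._hash = h
        return self._hash
```

Repeated uses of the same object as a dict key then pay for the element-by-element tuple hash only once, instead of on every lookup.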
Linked PRs
- `_HashedSeq` wrapper from `lru_cache` (#131922)