Reduce `size_of<HashMap>`? #69
Comments
This includes a `size_of` regression for a few DOM types, due to rust-lang/rust#58623, which replaces the implementation of `HashMap` in the standard library with Hashbrown. Although `size_of<HashMap>` grows, it’s not obvious how total memory usage is going to be impacted: Hashbrown only has one `u8` instead of one `usize` of overhead per hash table bucket for storing (part of) a hash, and so might allocate less memory itself. Hashbrown also typically has better run time performance: https://github.com/rust-lang/hashbrown#performance Still, I’ve filed rust-lang/hashbrown#69 about potentially reducing the `size_of<HashMap>` regression.
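The header-size gap described above can be observed directly with `std::mem::size_of`. A minimal sketch (exact byte counts vary by toolchain version, so only the 16-byte `RandomState` seed difference is asserted):

```rust
use std::collections::hash_map::{DefaultHasher, RandomState};
use std::collections::HashMap;
use std::hash::BuildHasherDefault;
use std::mem::size_of;

fn main() {
    // Size of the map header itself, not counting its heap allocation.
    let with_seed = size_of::<HashMap<(), (), RandomState>>();
    // BuildHasherDefault<_> is zero-sized, so no inline hasher state is stored.
    let without_seed = size_of::<HashMap<(), (), BuildHasherDefault<DefaultHasher>>>();
    println!("with RandomState: {with_seed} bytes");
    println!("with BuildHasherDefault: {without_seed} bytes");
    // RandomState keeps two u64 seed words inline, accounting for 16 bytes.
    assert_eq!(with_seed - without_seed, size_of::<RandomState>());
}
```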
Referenced by servo/servo#23263: Upgrade to rustc 1.36.0-nightly (e305df184 2019-04-24)
Some Hashers, and FxHasher in particular, mix the low bits poorly; so only use the high bits for both hashes. Instead of introducing `bucket_shift` next to `bucket_mask` in the top-level struct, derive them both from `capacity_log2`. This avoids exacerbating struct size in rust-lang#69. This change is also a prerequisite for rayon-based parallel collect, which requires that the high bits of the index are always in the same place regardless of table size.

```
name                     old ns/iter  new ns/iter  diff ns/iter  diff %   speedup
find_existing            0            0            0             NaN%     x NaN
find_existing_high_bits  84,320       3,442        -80,878       -95.92%  x 24.50
find_nonexisting         0            0            0             NaN%     x NaN
get_remove_insert        25           29           4             16.00%   x 0.86
grow_by_insertion        205          209          4             1.95%    x 0.98
grow_by_insertion_kb     290          180          -110          -37.93%  x 1.61
hashmap_as_queue         25           26           1             4.00%    x 0.96
insert_8_char_string     18,038       17,491       -547          -3.03%   x 1.03
new_drop                 0            0            0             NaN%     x NaN
new_insert_drop          45           50           5             11.11%   x 0.90
```
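The indexing scheme this commit describes can be sketched as follows. The struct and names are illustrative, not hashbrown’s actual internals; the point is that both the mask and the shift fall out of one `capacity_log2` field, and that high-bits indexing keeps the selecting bits in a fixed position:

```rust
// Sketch of deriving both the bucket mask and the shift from one
// `capacity_log2` field. Names and layout are illustrative only.
struct TableGeometry {
    capacity_log2: u32, // log2 of the bucket count; assumed 1..=63 here
}

impl TableGeometry {
    fn bucket_mask(&self) -> usize {
        (1usize << self.capacity_log2) - 1
    }

    // Low-bits indexing (`hash & mask`): FxHasher mixes low bits poorly,
    // so this can cluster keys into the same buckets.
    fn index_low_bits(&self, hash: u64) -> usize {
        (hash as usize) & self.bucket_mask()
    }

    // High-bits indexing: the bits chosen for the index stay in the same
    // position regardless of table size.
    fn index_high_bits(&self, hash: u64) -> usize {
        (hash >> (64 - self.capacity_log2)) as usize
    }
}

fn main() {
    let g = TableGeometry { capacity_log2: 4 }; // 16 buckets
    assert_eq!(g.bucket_mask(), 15);
    // The top four bits of the hash select the bucket.
    assert_eq!(g.index_high_bits(0xF000_0000_0000_0000), 15);
    assert_eq!(g.index_low_bits(0x0000_0000_0000_0003), 3);
}
```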
Fixes rust-lang#69 to the extent possible without more dramatic changes.
I think it would be much more interesting to investigate moving to a layout where all of the fields are in the same allocation as the keys and hashes, with an empty singleton to avoid allocating empty maps, as is done with thin-vec and BTreeMap. This would make it much cheaper to move HashMaps around, and would dramatically reduce the memory footprint of things which only sometimes actually create a real map.

It’s non-trivial work though, because it requires carefully ensuring you never mess up the CoW nature of the singleton, as well as carefully ensuring that you never mismatch the size/align of the allocation. That said, I think it might be possible, as we should be able to avoid ever looking at the keys for a static ctrl array that’s all EMPTY. The BuildHasher is the most annoying part here, I think: it has to be inline. But that’s fine for everyone using FxHasher or any other BuildHasherDefault, which is zero-sized. Sadly RandomState (std’s default) needs 16 bytes for the seed.
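The empty-singleton idea can be sketched roughly like this. This is a heavily simplified illustration (it ignores alignment concerns, the CoW discipline, and the inline BuildHasher problem discussed above); all names here are hypothetical:

```rust
use std::mem::size_of;
use std::ptr::NonNull;

// A header that would live at the start of the single heap allocation,
// conceptually followed by the ctrl bytes and buckets.
#[repr(C)]
struct Header {
    bucket_mask: usize,
    growth_left: usize,
    len: usize,
}

// One shared, never-mutated header standing in for every empty map.
static EMPTY: Header = Header { bucket_mask: 0, growth_left: 0, len: 0 };

// The map itself shrinks to a single pointer, making it cheap to move.
struct ThinMap {
    header: NonNull<Header>,
}

impl ThinMap {
    fn new() -> Self {
        // Point at the static singleton; empty maps allocate nothing.
        ThinMap { header: NonNull::from(&EMPTY) }
    }

    fn len(&self) -> usize {
        unsafe { self.header.as_ref().len }
    }

    // Any mutation would first have to replace the singleton with a real
    // allocation (the CoW discipline the comment above warns about).
    fn is_empty_singleton(&self) -> bool {
        std::ptr::eq(self.header.as_ptr(), &EMPTY)
    }
}

fn main() {
    let map = ThinMap::new();
    assert_eq!(size_of::<ThinMap>(), size_of::<usize>()); // one pointer wide
    assert_eq!(map.len(), 0);
    assert!(map.is_empty_singleton());
}
```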
As an aside, this would be an exciting experiment because we would have a very optimized implementation to compare the performance difference of this "trick" to in an apples-to-apples way. I’m not aware of any detailed analysis of the second- and third-order effects of this optimization, like cache locality and whether it upsets the optimizer, which have historically been reasons to hand-wring over it.
With the switch to Hashbrown, `std::mem::size_of::<std::collections::HashMap<(), ()>>()` on 64-bit platforms grew from 40 bytes to 56. (24 to 40 bytes with `BuildHasherDefault`.) In Servo’s DOM implementation we have types whose `size_of` is several hundreds of bytes. Because some not-so-unusual pages can have very many DOM nodes, this size can add up to significant memory usage. We have unit tests for `size_of` to ensure it does not accidentally grow, which fail in today’s Rust Nightly because several types grew by 16 bytes because they contain a `HashMap`.

Hashbrown’s `HashMap` contains a `RawTable`, which has five pointer-sized fields:

hashbrown/src/raw/mod.rs, lines 328 to 348 in 7e79b0c
Some of them seem potentially redundant, but I’m not sure. For example there are two pointers that seem to be in the same allocation. How expensive would it be to re-compute the second pointer every time it is needed?
Are `bucket_mask`, `growth_left`, and `len` related such that one of them could be computed from the others?
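For context, here is a hedged sketch of how those three counters interact in an open-addressing table with tombstones. The semantics are assumed from the general Swiss-table design, not quoted from hashbrown’s source: with a 7/8 load factor, `growth_left` starts at the capacity and shrinks on inserts into empty slots, but deletions that leave tombstones reduce `len` without restoring `growth_left`, so the two can drift apart:

```rust
// Illustrative accounting for a 7/8-load-factor open-addressing table
// (assumed semantics, not hashbrown's actual implementation).
struct Counters {
    bucket_mask: usize, // buckets - 1; bucket count is a power of two
    growth_left: usize, // inserts allowed before a resize/rehash
    len: usize,         // live entries
}

impl Counters {
    fn with_buckets(buckets: usize) -> Self {
        assert!(buckets.is_power_of_two());
        let capacity = buckets / 8 * 7; // 7/8 load factor
        Counters { bucket_mask: buckets - 1, growth_left: capacity, len: 0 }
    }

    fn insert_into_empty_slot(&mut self) {
        self.growth_left -= 1;
        self.len += 1;
    }

    // Deleting may leave a tombstone: len drops, but growth_left is not
    // restored, so growth_left cannot be derived from len in general.
    fn delete_leaving_tombstone(&mut self) {
        self.len -= 1;
    }
}

fn main() {
    let mut c = Counters::with_buckets(16); // capacity 14
    c.insert_into_empty_slot();
    c.insert_into_empty_slot();
    c.delete_leaving_tombstone();
    assert_eq!(c.len, 1);
    assert_eq!(c.growth_left, 12); // not 14 - len = 13: the tombstone is unaccounted in len
    assert_eq!(c.bucket_mask, 15);
}
```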