rayon 1.10.0
I wrote a small Mandelbrot visualizer that includes this snippet:
```rust
let data: Vec<u8> = num_iterations
    .par_iter()
    .map(|num_iterations| (counts[*num_iterations as usize], num_iterations))
    .map(|(counts_cum, num_iterations)| {
        (
            1.0 - f64::from(counts_cum) / (WIDTH * HEIGHT) as f64,
            num_iterations,
        )
    })
    .map(|(brightness, num_iterations)| (brightness.sqrt(), num_iterations))
    .flat_map(|(mut brightness, num_iterations)| {
        if *num_iterations == MAX_ITER {
            brightness = 0.0;
        }
        [
            (brightness * 255.0) as u8,
            (brightness * 255.0) as u8,
            (brightness * 255.0) as u8,
            255u8,
        ]
    })
    .collect();
```
`num_iterations` is a Vec of length 10_000 * 10_000 and `counts` is a Vec of length 10_000. With the sequential version (using `iter()` instead of `par_iter()`), the largest memory usage I observed was around 380 MB, which seems to fit the math quite well: 10_000 * 10_000 * 4 bytes ≈ 400 MB.
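For reference, a tiny sketch of that back-of-the-envelope calculation, assuming `WIDTH = HEIGHT = 10_000` as implied by the vector length above:

```rust
// Assumed from the lengths above: WIDTH = HEIGHT = 10_000.
const WIDTH: usize = 10_000;
const HEIGHT: usize = 10_000;

fn main() {
    // The flat_map above emits 4 bytes (RGBA) per pixel into the collected Vec<u8>.
    let expected_bytes = WIDTH * HEIGHT * 4;
    println!("{} bytes (~{} MB)", expected_bytes, expected_bytes / 1_000_000); // ~400 MB
}
```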
When I switch to using `par_iter`, the usage spikes to over 16 GB (an increase of roughly 40x) after a couple of seconds. I am on a 6-core/12-thread processor, so the increase is not linear with the number of cores as I first suspected. The runtime is also much slower than single-threaded (I was unable to time it, since it took too long on Windows and kept getting OOM-killed on Linux).
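In case a more self-contained snippet is useful, here is a stripped-down sketch of the same pattern (`par_iter` → `flat_map` returning a fixed-size RGBA array → `collect`). The iteration counts and the `MAX_ITER` value below are placeholders rather than the real Mandelbrot data; it is only meant to show the shape of the pipeline, and with these sizes it allocates hundreds of MB even before any spike:

```rust
use rayon::prelude::*;

// Assumed from the description above: WIDTH = HEIGHT = 10_000.
// MAX_ITER = 255 is a placeholder for the real constant in the repo.
const WIDTH: usize = 10_000;
const HEIGHT: usize = 10_000;
const MAX_ITER: u32 = 255;

fn main() {
    // Placeholder per-pixel iteration counts (the real ones come from the
    // Mandelbrot escape-time loop).
    let num_iterations: Vec<u32> = vec![MAX_ITER / 2; WIDTH * HEIGHT];

    // Same shape as the snippet above: par_iter -> flat_map producing a
    // 4-byte RGBA array per element -> collect into a Vec<u8>.
    let data: Vec<u8> = num_iterations
        .par_iter()
        .flat_map(|&n| {
            let brightness: f64 = if n == MAX_ITER { 0.0 } else { 0.5 };
            let b = (brightness * 255.0) as u8;
            [b, b, b, 255u8]
        })
        .collect();

    assert_eq!(data.len(), WIDTH * HEIGHT * 4);
}
```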
In case it is useful, the repo with the full code is here.