WIP: add pixel-averaging option for SPH projections #5121
Conversation
pre-commit.ci autofix
**The math behind the pixel-averaging SPH projection implementation**

I thought this might be useful to get some early feedback, and as a first draft for the documentation.

In SPH simulations, a field $q$ is evaluated at a position $\vec{r}$ as a kernel-weighted sum over the particles $i$:

$$q(\vec{r}) = \sum_i \frac{m_i}{\rho_i} \, q_i \, W(|\vec{r} - \vec{r}_i|, h_i),$$

where $m_i$, $\rho_i$, $q_i$, and $h_i$ are the mass, density, field value, and smoothing length of particle $i$, and the smoothing kernel has the form

$$W(r, h) = \frac{1}{h^3} \, U\!\left(\frac{r}{h}\right),$$

where U is a function with a dimensionless output (and input). The volume integral of the kernel is normalized to one,

$$\int W(|\vec{r}|, h) \, \mathrm{d}x \, \mathrm{d}y \, \mathrm{d}z = 1,$$

where the cartesian coordinates $x$ and $y$ span the image plane and $z$ runs along the line of sight.

For the pixel-averaging projections, we want to integrate the field over each pixel's area and along the line of sight, and divide by the pixel area $A_\mathrm{pix}$:

$$\bar{q}_\mathrm{pix} = \frac{1}{A_\mathrm{pix}} \int_\mathrm{pix} \mathrm{d}x \, \mathrm{d}y \int \mathrm{d}z \; q(x, y, z).$$

The assumption that integrals in the line-of-sight direction are limited by the SPH particle size, rather than by the projection domain, allows us to speed up the integral calculations. We get the line-of-sight integrals by interpolating tabulated values for different impact parameters, since the line-of-sight integral of the kernel depends only on the dimensionless impact parameter $b/h$:

$$\int_{-\infty}^{\infty} W\!\left(\sqrt{b^2 + z^2}, h\right) \mathrm{d}z = \frac{1}{h^2} \, Y\!\left(\frac{b}{h}\right),$$

with $Y$ a dimensionless function that only needs to be tabulated once.

For the projected quantity, each particle then contributes its tabulated line-of-sight integral, averaged over the pixel area:

$$\bar{q}_\mathrm{pix} = \frac{1}{A_\mathrm{pix}} \sum_i \frac{m_i}{\rho_i} \, q_i \int_\mathrm{pix} \frac{1}{h_i^2} \, Y\!\left(\frac{b_i(x, y)}{h_i}\right) \mathrm{d}x \, \mathrm{d}y,$$

where $b_i(x, y)$ is the impact parameter of the sightline through $(x, y)$ with respect to particle $i$. We can get the 1-D line-of-sight integrals $Y$ essentially exactly from the tabulated values. For the integrals over the pixel area, we subsample: $Y$ is evaluated on a grid of points within each pixel and averaged, with the number of evaluation points per pixel set by the ratio of the smoothing length to the pixel size. If a particle is large compared to a pixel, a single evaluation per pixel is enough; if it is small compared to a pixel, the pixel is subsampled until the kernel is adequately sampled.
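The tabulate-and-interpolate step for the line-of-sight kernel integrals can be sketched as follows. This is a minimal illustration rather than the PR's actual backend: it assumes the standard cubic spline kernel with compact support at $r < h$, tabulates the dimensionless integral once, and evaluates projected kernels by linear interpolation; all function names here are made up for the sketch.

```python
import numpy as np

def u_cubic(q):
    """Dimensionless cubic spline kernel U(q) = h^3 W(r, h), q = r / h, support q < 1."""
    q = np.asarray(q, dtype=float)
    out = np.zeros_like(q)
    inner = q < 0.5
    outer = (q >= 0.5) & (q < 1.0)
    out[inner] = 1.0 - 6.0 * q[inner] ** 2 + 6.0 * q[inner] ** 3
    out[outer] = 2.0 * (1.0 - q[outer]) ** 3
    return (8.0 / np.pi) * out

def tabulate_y(n_b=256, n_z=4097):
    """Tabulate Y(b) = integral of U(sqrt(b^2 + z^2)) dz for impact parameters b in [0, 1]."""
    b = np.linspace(0.0, 1.0, n_b)
    z = np.linspace(-1.0, 1.0, n_z)
    f = u_cubic(np.sqrt(b[:, None] ** 2 + z[None, :] ** 2))
    # trapezoidal rule along the line-of-sight axis
    y = 0.5 * ((f[:, 1:] + f[:, :-1]) * np.diff(z)).sum(axis=1)
    return b, y

B_TAB, Y_TAB = tabulate_y()

def projected_kernel(b, h):
    """(1/h^2) Y(b/h): line-of-sight integral of W at impact parameter b."""
    return np.interp(np.asarray(b, dtype=float) / h, B_TAB, Y_TAB, right=0.0) / h**2
```

A useful consistency check is that the projected kernel integrates to one over the image plane, i.e. $2\pi \int_0^1 Y(q)\, q\, \mathrm{d}q = 1$.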
I don't yet have time to review this, but I just wanted to comment. Thank you for this incredibly important work on getting the ProjectionPlot functionality upgraded. I think this will have a positive impact on the work of hundreds to thousands of people. Looking forward to discussing further soon!
Force-pushed cc5ffc9 to f1f5734 (compare)
I was hitting a bit of a wall with testing against the baseline, so I switched to testing against the pencil-beam method in a smaller test. This approach gets me better subsampling faster: I simply project onto a higher-resolution grid than the testing target, then average the values in blocks of those pixels. Based on this testing, I think getting the pixel values right to high precision with this method is maybe just kind of hard...

For this first test, I downsampled pixel values from grids of increasing size, then calculated the total projected mass by adding up the pixel values and multiplying by the area per pixel. The x axis shows the subsampling factor along each grid axis. These projections are the ones shown in the next plot: a simple test dataset with particles of different sizes, and different placements relative to the grid.

However, the values of individual pixels do not seem to converge as rapidly. The image below is messy, as I tend to make them, but the convergence test does require many images. The top two rows show the pixel-averaging method: first, projections of the same dataset, subsampled by the factors listed above, which increase from left to right. Black dashed lines indicate the extent of the seven SPH particles in the test dataset; this dataset was periodic in the x-axis direction, but not in the y-axis direction. The second row shows the difference between each subsampled image and the image at the right, with the highest subsampling factor. The labels at the top of the images show the maximum absolute value of the relative difference (pixel value) / (pixel value, max. subsampling) - 1.

The lowest-resolution images here show ~10%-level differences with the ones subsampled at higher resolution, which is not as good as I had hoped. For comparison, the next two rows show the same convergence test, but for the pencil-beam projections. Those definitely don't do as well, with large differences at the lowest subsampling resolution. However, even at a 64x64 subsampling rate, there are still ~1% differences here. The smallest smoothing length in the test data is 0.2, so this subsampling rate corresponds to 0.4 * 64 / 1 ≈ 26 pixels per dimension across the kernel. The minimum in the pixel-averaging method's backend is 32 pixels per dimension.

The bottom row compares the pencil-beam and pixel-averaging projections at different subsampling resolutions. The images for 128x128 subsampling pixels are identical, because the subsampling pixels here are small enough that the smallest particles in the test dataset are resolved at the minimum level. This also holds for the higher resolutions I tried.

tl;dr:
@JBorrow @neutrinoceros @chummels, any advice on what level of convergence we want for these pixel values? As a point of reference, I think the volume integrals are limited to a ~1e-4 -- 1e-5 level of (relative) accuracy by the kernel line integral approximation used in both projection methods.
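For reference, the block-averaging step of the convergence test described above can be sketched like this (toy helpers with hypothetical names, not the PR's actual test code):

```python
import numpy as np

def block_average(img, factor):
    """Downsample a 2-D image by averaging non-overlapping factor x factor blocks."""
    ny, nx = img.shape
    if ny % factor or nx % factor:
        raise ValueError("image dimensions must be divisible by the block factor")
    return img.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

def total_projected_mass(surface_density, pixel_area):
    """Total mass: sum of pixel values times the area per pixel."""
    return surface_density.sum() * pixel_area
```

Note that for a fixed high-resolution image, block averaging conserves the total projected mass exactly by construction; the convergence being tested comes from the underlying projection itself changing with the subsampling resolution, and from the per-pixel differences.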
I've taken a look at convergence of the methods with pixel size; above I showed this for a test dataset, but I did also take a look at some actual simulation data. With grids up to 800x800 pixels, it seems like cosmological volumes and zoom simulations projected over their whole parent boxes don't converge well (yet). That's not unexpected, given that the pixels are still larger than many resolution elements here, but it is worth keeping in mind. This gizmo Milky Way disk dataset shows some convergence, even though it's not quite there yet at 800x800 pixels. The Arepo cluster projections show pretty good convergence, I think.
I still need to write the narrative docs to go with this PR, but I would appreciate it if people could start taking a look at the implementation part of this PR. I've run the tests and done the further tests that I planned for this, but the pixel values don't agree as well with those obtained from very high pixel subsampling rates as I had hoped. @JBorrow and @AshKelly, I would particularly appreciate feedback from you on whether this is a bug that needs fixing, or whether these values just converge slowly. I'm hoping I can resolve the type check failure by rebasing after #5137 is done and merged.
One quick comment not related to the details of the implementation:
I could be wrong, but this feels like a pretty big change to drop on folks by default in the next feature release (4.5). IMO it'd be a smoother transition not to enable this by default initially: i.e., I'd prefer that the 4.5 release introduce and advertise the functionality, and that a subsequent release (4.6 or 5.0?) switch it to the default.
Good point! It would indeed be better to set the default to "pencilbeam" for now.
I've taken a brief look at the code and I can't immediately see whether there is something wrong. However, if you are using the subsampling strategy that we demonstrated in Borrow & Kelly (2021), there should be no 'convergence': you should get the same answer irrespective of your pixel size. I believe there must hence be a bug in the code. You can see in our Fig. 9 that, for a particle setup similar to yours, we are able to constrain reconstruction errors to approximately less than 0.01% in all cases. You could test this yourself against our reference implementation included in swiftsimio. You can find information on the lower-level API here: https://swiftsimio.readthedocs.io/en/latest/visualisation/projection.html#lower-level-api

To use that, you would do something like:

```python
from swiftsimio.visualisation.projection_backends.subsampled import scatter

# Rescale x, y, and hsml to [0, 1] first, then:
parameters = {"x": x, "y": y, "m": masses, "h": hsml}
output = {res: scatter(**parameters, res=res) for res in (100, 200, 400, 800)}
```

Also consider comparing against […]

For what it's worth, I believe we use a different strategy than you to determine how to subsample pixels. We define a minimum number of kernel evaluations (one-sided); by default this is 16. So if a kernel overlaps fewer than 32x32 pixels, we oversample those pixels: if it overlaps between 16x16 and 32x32 pixels, each pixel is evaluated 4 (2x2) times; between 8x8 and 16x16, each pixel is evaluated 16 (4x4) times; and so forth: https://swiftsimio.readthedocs.io/en/latest/_modules/swiftsimio/visualisation/projection_backends/subsampled.html#scatter
@JBorrow Thanks! I was doing 1 sample per pixel if 32x32 pixels were overlapped; I'll see if a higher minimum number of evaluations works better. If not, I'll have to see if there's a deeper problem.
Force-pushed d792beb to b5360a0 (compare)
Note: I rebased this branch onto main after Chris Havelin's type annotation fixes in the SPH projection tests were merged.
PR Summary
This PR will add an option (to be set as default) for SPH projections. The current implementation integrates the specified field along a line of sight through the center of each pixel of the grid the simulation is projected onto; this PR will add the option to average the specified field over the pixel area instead. The output units will be the same: input field units per unit pixel area (non-weighted projections), or input field units (weighted projections). This should close issue #4993.
The implementation is based on Borrow & Kelly (2021).
This PR should add this projection option for both axis-aligned and off-axis projections.
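To illustrate the difference between the two estimators described in the summary, here is a toy comparison for a single particle of unit mass. A 2-D Gaussian stands in for the tabulated SPH projected kernel, and all names and numbers are chosen for the illustration only; this is not the PR's code.

```python
import numpy as np

def gaussian_surface_density(b, h):
    # Stand-in projected kernel: 2-D Gaussian of width h with unit total mass
    # (the real code uses tabulated SPH kernel line-of-sight integrals).
    return np.exp(-0.5 * (b / h) ** 2) / (2.0 * np.pi * h**2)

def project_particle(h, npix=8, half_width=1.0, subsample=1):
    """Surface density on an npix x npix grid for a particle at the origin.

    subsample=1 is a pencil-beam estimate (one kernel evaluation at each
    pixel center); subsample > 1 averages a subsample x subsample grid of
    evaluations per pixel (the pixel-averaging estimate)."""
    n = npix * subsample
    edges = np.linspace(-half_width, half_width, n + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    bx, by = np.meshgrid(centers, centers)
    fine = gaussian_surface_density(np.hypot(bx, by), h)
    # average the fine evaluations in blocks to get per-output-pixel values
    return fine.reshape(npix, subsample, npix, subsample).mean(axis=(1, 3))

pixel_area = (2.0 / 8) ** 2
pencil = project_particle(h=0.05, subsample=1)     # pencil-beam estimate
averaged = project_particle(h=0.05, subsample=16)  # pixel-averaged estimate
```

With a kernel much smaller than a pixel, the pencil-beam estimate misses most of the mass because no pixel center lands near the particle, while the pixel-averaged estimate recovers the total mass to about a percent; this is the regime where the two options differ most.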
PR Checklist