Evaluating a field at certain indices in a GPU simulation #4052
-
Hi there! I am trying to compute a term containing variables that must be evaluated at a depth z = h(x, y), that is: term(x, y) = f(x, y, h(x, y)). Here h is the mixed-layer depth, which I define as the depth of maximum salinity gradient. I managed to write code that does this on the GPU, but it involves a lot of indexing. More details on my implementation: say term(x, y) = var(x, y, h). I define a field:
and then the term is computed elsewhere and saved to a netCDF file.
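The indexing pattern described above can be sketched in plain Julia (no Oceananigans API, so the GPU details are left out). All names here (`S`, `var`, `k_mld`, `term`) are illustrative placeholders, not taken from the original post:

```julia
# Hypothetical sketch of evaluating a variable at the depth of maximum
# salinity gradient, column by column, using plain arrays.
Nx, Ny, Nz = 4, 4, 8
S   = rand(Nx, Ny, Nz)   # stand-in for the salinity field's interior data
var = rand(Nx, Ny, Nz)   # the variable to be evaluated at the mixed-layer depth

# Vertical salinity gradient (finite difference along dims = 3), then the
# vertical index of its maximum magnitude in each (x, y) column:
dSdz  = diff(S; dims=3)
k_mld = [argmax(abs.(dSdz[i, j, :])) for i in 1:Nx, j in 1:Ny]

# term(x, y) = var(x, y, h(x, y)): pick one vertical level per column.
term = [var[i, j, k_mld[i, j]] for i in 1:Nx, j in 1:Ny]
```

On the GPU the per-column loop would instead be expressed as a kernel or broadcast, which is where the heavy indexing the question mentions comes from.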
Replies: 1 comment
-
First you can try

```julia
max_dSdz = maximum(∂z(model.tracers.S), dims=3)
```

or

```julia
max_dSdz = maximum(Field(∂z(model.tracers.S)), dims=3)
```

then `max_dSdz` should be a 2D field. If you want to form an object that can be repeatedly `compute!`'d, then I think you want to use `Reduction` with `maximum!` (see the docstring for `Reduction`):

```julia
max_dSdz_reduction = Reduction(maximum!, ∂z(model.tracers.S), dims=3)
max_dSdz = Field(max_dSdz_reduction)
compute!(max_dSdz)
```

To go further with custom kernels I suggest checking out the implementation of `MixedLayerDepthOperand` and `MixedLayerDepthField` here: https://github.com/CliMA/ClimaOcean.jl/blob/main/src/Diagnostics/mixed_layer_depth.jl