i.evapo.mh: add test file #5519
base: main
Conversation
Why aren't any of these new tests failing, since the issue #5520 you reported isn't addressed yet?
If you want to prove that the issue was "fixed" correctly, you could add the test here with an expected failure, and remove the expected failure in the PR that fixes the issue (x1000 scale).
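The expected-failure workflow suggested above can be sketched with plain-Python `unittest` (GRASS's own gunittest framework builds on it). The test name, the `4400.0` value, and the target `4.4` are hypothetical stand-ins for the x1000 scaling bug; the idea is that the decorator is removed by the PR that fixes the module:

```python
import unittest

class TestScaling(unittest.TestCase):
    # Hypothetical placeholder for the x1000 scaling bug (#5520): the test
    # encodes the correct expected value now and is marked as an expected
    # failure; the decorator is removed by the PR that fixes the module.
    @unittest.expectedFailure
    def test_si_unit_scaling(self):
        module_output = 4400.0  # stand-in for the buggy, x1000-scaled output
        self.assertAlmostEqual(module_output, 4.4, places=1)

# Running the suite reports the case as an expected failure, not an error,
# so CI stays green until the fix lands.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestScaling)
result = unittest.TestResult()
suite.run(result)
print(len(result.expectedFailures), result.wasSuccessful())  # 1 True
```

Once the fix is merged, removing `@unittest.expectedFailure` turns the same test into the regression guard for the corrected behavior.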
Thanks for the feedback!
Your more mathematical and physical approach to creating your tests reminds me of the spirit of test-driven development (TDD), where you write the tests first, see them fail, then implement the code and see the tests pass, gaining confidence in those tests. In your latest PRs, you are effectively going back to the papers and defining the specification, a bit like spec.js files in JavaScript/Node testing. The univar tests we often use are closer to snapshot-based regression testing. They are useful to start with, to make sure nothing else changes. Instead of a CRC or a checksum, we reduce the output, of a raster for example, to a list of univariate statistics, hoping they will let us catch changes in the distribution of the data while still allowing architecture-specific and math-optimisation variations to pass; this avoids storing complete expected data in the repository. So, if the univar results are the only blocking part, I would be comfortable having them either 1) "broken" and skipped, then fixed in the PR fixing the bug, or 2) just added in the second PR once the correct values are known.
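The univar-based snapshot check described above can be sketched in plain Python. The function and field names here are illustrative (GRASS's `r.univar` reports a similar set of statistics); the point is reducing a raster to a few numbers and comparing them against stored references with a tolerance:

```python
import math

def univar_stats(values):
    """Reduce a raster-like list of cell values to univariate statistics,
    mimicking the snapshot that r.univar-style checks compare against."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return {
        "n": n,
        "min": min(values),
        "max": max(values),
        "mean": mean,
        "stddev": math.sqrt(var),
    }

def fits_univar(actual, reference, precision):
    """Compare computed statistics against stored reference values,
    tolerating small architecture-dependent floating-point drift."""
    return all(abs(actual[k] - reference[k]) <= precision for k in reference)

# Regression check against previously recorded statistics: a distribution
# change (e.g. an x1000 scale error) would fail, tiny FP drift would not.
cells = [0.0, 1.2, 2.4, 3.6, 4.8]
reference = {"min": 0.0, "max": 4.8, "mean": 2.4}
print(fits_univar(univar_stats(cells), reference, precision=1e-6))  # True
```

Note how a per-statistic `precision` is what lets math-optimisation variations pass while still catching a wholesale change in the data's distribution.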
I have updated the expected statistics in the test file. These changes confirm that the fix is correctly applied and that the output now aligns with the expected SI unit handling.
Great work! |
And since we saw the tests fail before, everything seems right now!
This PR introduces a comprehensive regression test suite for the `i.evapo.mh` module in GRASS GIS. These tests are designed to detect unintended changes in computational results and to ensure consistent evapotranspiration output under various input scenarios and module flag settings.

Changes Made

Implemented tests for:

- Output clamping with the `-z` flag (`test_z_flag_negative_values`): ensures that negative ET values are set to zero as expected.
- Precipitation exclusion with the `-h` flag (`test_h_flag_precipitation_ignored`): confirms that results remain unchanged when precipitation is omitted under the `-h` flag.
- Alternative scaling behavior with the `-s` flag (`test_s_flag_different_output`): verifies that the `-s` flag produces different output from the default mode.
- Hargreaves formula consistency (`test_hargreaves_formula`): cross-validates module output against the original Hargreaves reference implementation using known inputs.
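The cross-validation idea behind the formula-consistency test can be illustrated with the standard Hargreaves (1985) equation, ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin). This is a sketch of the reference side only; the exact coefficients and unit handling used by `i.evapo.mh` should be read from the module's source, and the input values below are arbitrary known inputs, not values from the actual test:

```python
import math

def hargreaves_et0(ra, tmean, tmax, tmin):
    """Reference evapotranspiration (mm/day) from the standard
    Hargreaves (1985) equation. ra is extraterrestrial radiation
    expressed as equivalent mm/day of evaporation; temperatures in C."""
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

# Known-input check: compare the module's raster output at a cell
# against this independently computed reference value.
expected = hargreaves_et0(ra=12.0, tmean=25.0, tmax=32.0, tmin=18.0)
print(expected)
```

Computing the reference independently, rather than re-reading the module's own output, is what makes this a cross-validation rather than a snapshot test.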
Performance

This PR significantly improves the test coverage for `i.evapo.mh`. Looking forward to feedback or suggestions for further refinement!

I have found a bug in the module implementation; the attached issue is #5520.