low_Mach for HLL Riemann solver and 6-equation model #833
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## master #833 +/- ##
==========================================
- Coverage 43.15% 43.15% -0.01%
==========================================
Files 68 68
Lines 20262 20264 +2
Branches 2424 2429 +5
==========================================
- Hits 8745 8744 -1
Misses 10054 10054
- Partials 1463 1466 +3
Benchmark / Georgia Tech | Phoenix (NVHPC) (cpu) (pull_request) failed with the following error:
It looks like the failure comes not from this PR but from the master branch. Do you have any idea? @sbryngelson
@hyeoksu-lee I restarted the jobs. The runners on that system have been failing spuriously lately, and we haven't figured out why yet.
@wilfonba Great, thanks!
@hyeoksu-lee is this PR complete/ready for review?
@sbryngelson Yes, it's ready for review. I haven't completed some of the GPU checklist items, though. The code change is minimal, so I think it barely affects GPU performance. If you want me to check them to make sure, please let me know!
This looked good, but ChatGPT found a few things that should maybe be fixed/changed. I wouldn't implement the one that involves
The divide-by-zero thing is interesting. I think it could be a good addition in the future, because many NaNs start from this part of the code, broadly speaking. But I don't think this is the right PR for adding that -- it could introduce breaking changes. See this link: https://chatgpt.com/share/6829cb31-7890-800a-a903-449b6ba5ddbd
@sbryngelson Very interesting that ChatGPT can review the PR like this! However, I think ChatGPT misunderstood one thing here. It stated:
However, in true low Mach limits,
where
Yeah, I'm not worried about that one.
Also, I think the other suggestions from ChatGPT don't really improve the code, except for the redundant
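For context on the divide-by-zero point raised above: in an HLL-type solver, one natural place for a guard is the wave-speed denominator of the flux formula. Below is a scalar sketch of the textbook HLL flux with such a guard; it is illustrative only (not MFC's Fortran implementation), and the `eps` threshold and averaging fallback are my assumptions:

```python
def hll_flux(UL, UR, FL, FR, sL, sR, eps=1e-12):
    """Textbook HLL flux for one conserved scalar (illustrative sketch).

    UL, UR: left/right states; FL, FR: left/right physical fluxes;
    sL, sR: left/right wave-speed estimates.
    """
    if sL >= 0.0:      # all waves move right: pure upwind left flux
        return FL
    if sR <= 0.0:      # all waves move left: pure upwind right flux
        return FR
    denom = sR - sL
    if denom < eps:    # degenerate wave fan: avoid divide-by-zero/NaN
        return 0.5 * (FL + FR)
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / denom


# Subsonic example: the flux blends both sides and adds dissipation
print(hll_flux(0.0, 1.0, 0.0, 0.5, -1.0, 1.0))  # -0.25
```

Without the `denom` guard, coincident wave-speed estimates would produce 0/0, which is one plausible seed for the NaNs mentioned above.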
Description

This PR extends the `low_Mach` feature to the HLL Riemann solver and the 6-equation model. Previously, it covered only the HLLC Riemann solver and the 5-equation model. The newly added combinations are listed below.

This PR also includes a minor fix to `omega_wrt(3)` in post_process: one should be able to compute `omega_wrt(3)` even in 2D flow, which the current master branch does not allow. I found this bug while validating `low_Mach` using `omega_wrt(3)` in 2D flow.

Type of change
Please delete options that are not relevant.
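As an aside on the `omega_wrt(3)` fix: the z-vorticity ω_z = ∂v/∂x − ∂u/∂y uses only in-plane derivatives, so it is well defined for a 2D flow. A minimal NumPy sketch (grid and field names are illustrative, not MFC's internals):

```python
import numpy as np

# Illustrative uniform 2D grid (not MFC's data layout)
nx, ny, L = 64, 64, 1.0
dx = dy = L / nx
x = (np.arange(nx) + 0.5) * dx
y = (np.arange(ny) + 0.5) * dy
X, Y = np.meshgrid(x, y, indexing="ij")

# Solid-body rotation about the domain center: u = -(y - c), v = x - c,
# for which omega_z = dv/dx - du/dy = 2 exactly.
u = -(Y - L / 2)
v = X - L / 2

# z-vorticity from in-plane differences only -- no third dimension
# is needed, which is why omega_wrt(3) makes sense even in 2D.
omega_z = np.gradient(v, dx, axis=0) - np.gradient(u, dy, axis=1)
print(np.allclose(omega_z, 2.0))  # True
```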
Scope
If you cannot check the above box, please split your PR into multiple PRs that each have a common goal.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes.
Provide instructions so we can reproduce.
Please also list any relevant details for your test configuration.
HLL+5eq: 2D Gresho vortex

For HLL+5eq, `low_Mach == 2` does not work.

HLLC+6eq: 2D mixing layer

Test Configuration:
Checklist

- I have made corresponding changes to the documentation (`docs/`)
- I have added example cases in `examples/` that demonstrate my new feature performing as expected.
  - They run to completion and demonstrate "interesting physics"
- I ran `./mfc.sh format` before committing my code

If your code changes any code source files (anything in `src/simulation`):

To make sure the code is performing as expected on GPU devices, I have:

- Ran the code with `./mfc.sh run XXXX --gpu -t simulation --nsys`, and have attached the output file (`.nsys-rep`) and plain text results to this PR.
- Ran the code with `./mfc.sh run XXXX --gpu -t simulation --omniperf`, and have attached the output file and plain text results to this PR.