[REVIEW]: CRATE: A Python package to perform fast material simulations #5594
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
Review checklist for @Extraweich
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @RahulSundar
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @kingyin3613
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Hi @BernardoFerreira, very interesting project and good documentation. Since the checklist requires it: I could not find a statement of need in your docs, although you have one in the paper. What your Python package is used for does become clear if one has the theoretical background, but could you please add an explicit statement of need to your docs as well? Also, there seems to be no automated testing in the workflow. @Kevin-Mattheus-Moerman, how is this usually judged?
Hello @Extraweich, Thank you for your kind words and for taking the time to review this project! Concerning the points you raised:
@Extraweich, I added the Statement of Need as you requested (you may check it here and here)!
Thank you for updating the statement of need, @BernardoFerreira. This is fully satisfactory to me. Concerning the automated testing, I will wait for what @Kevin-Mattheus-Moerman has to say, since this is my first time reviewing for JOSS. I agree with your statement that analytical results are rare, and I would think that the benchmark comparisons with DNS results are sufficient.

Another point on my list concerns "State of the field: Do the authors describe how this software compares to other commonly-used packages?": your paper does not really compare the functionality of CRATE with other software packages out there. This may be because your SCA and ASCA approaches seem to be novel within the open source community. A quick search on the internet brought FFTHomPy and fibergen to my attention; they at least share the FFT-based solution of the Lippmann-Schwinger equation with your package. I think it would be worth saying a few words about related packages (e.g. the packages mentioned above) and highlighting what CRATE offers that these packages do not.

Lastly, "Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support": on your Git repository you clearly state how to report issues with the software. Is it intentional that you do not invite others to contribute? If not, you could quickly add a "CONTRIBUTING.md" to your repository. You can check my repository if you need an example. :)
@BernardoFerreira on automated testing: it is not a strict requirement to have fully automated testing, but it is highly encouraged. What is required at a minimum is that a user can run a testing suite (e.g. manually) to compare against known results. In this case, though, I would encourage you to explore adding automated unit tests.

You mention that comparison to analytical tests may be challenging. However, it should be relatively straightforward for you to add some sort of verification- and/or validation-type tests, which are usually "integration" tests because they test the overall behavior of the software. The typical sort of automated tests that are standard in Python packages (and of course other languages) are unit tests, which confirm that individual functions are working properly. So although I agree that a full "test" of something like an entire CFD or FEA package may be challenging, it should be much easier to confirm that individual functions are working properly given various known inputs. Given the above, I would encourage you to include some form of automated functionality testing / unit tests. Here are some resources our board members suggested when they helped me formulate this response:

@Extraweich perhaps you can comment on the nature of the benchmarking comparisons you mentioned. Are these manual evaluations for large functional blocks (combinations) of code? Do you agree that additional and automated testing of the behaviour of functional units, as I allude to above, should still be achievable in this case?
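As a concrete illustration of the kind of unit test suggested above, here is a minimal pytest-style sketch. Note that `strain_average` is a hypothetical helper, not part of CRATE's actual API; the point is only that small, pure functions can be checked against known inputs and outputs.

```python
import numpy as np


def strain_average(phase_strains, volume_fractions):
    """Volume-weighted average of per-phase strain values.

    Hypothetical helper used only to illustrate unit testing;
    it is not part of CRATE's actual API.
    """
    phase_strains = np.asarray(phase_strains, dtype=float)
    volume_fractions = np.asarray(volume_fractions, dtype=float)
    return float(np.dot(phase_strains, volume_fractions))


def test_strain_average_known_input():
    # Known input -> known output: two phases with equal volume fractions.
    assert strain_average([0.0, 2.0], [0.5, 0.5]) == 1.0


def test_strain_average_single_phase():
    # A single phase should return its own strain value.
    assert strain_average([3.0], [1.0]) == 3.0
```

Running `pytest` in the repository root would discover and execute tests written in this style automatically, and a GitHub Actions workflow could then run them on every push.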
The package offers a fast way of calculating homogenization results in inhomogeneous materials using FFT-based methods. The benchmarks compare these results with more classical approaches, such as FEM calculations (direct numerical simulations -> DNS). Personally, I think enough comparisons are given to prove sufficient agreement with the DNS results.
Hello @Extraweich, Concerning the points you raised:
@Extraweich, once again thank you for taking your time to review this project. Please let me know your thoughts on the points above!
Another thought that came to mind and has nothing to do with the publishing process here. I personally am not very familiar with the methods used in CRATE. I have some basic knowledge of FFT-based solutions to the Lippmann-Schwinger equation, since Matti Schneider (whom you cite once or twice in your dissertation) is remotely involved in my own dissertation project. Nevertheless, I found it somewhat tedious to fully grasp the potential of CRATE. That's because it's certainly a complicated topic. You give a lot of well-chosen benchmarks, but they kind of run themselves, and the user might feel a bit overwhelmed by the abundance of results. What do you think about adding a Jupyter notebook with a very simple step-by-step example to give the user the opportunity to more easily understand how she/he can incorporate CRATE's functionality into her/his specific research challenges? |
Thank you for your comments and suggestions!
Regarding your suggestion:
@editorialbot generate pdf
I have checked off all the items from the list. Thank you for your cooperation and for developing and providing CRATE to the open source community @BernardoFerreira.
Thank you for your review and for helping me improve this project @Extraweich!
@RahulSundar, @atzberg, @kingyin3613 can you please provide an update on review progress? Please tick any boxes you think can be ticked. If some cannot be ticked yet please give the authors some comments here, or in dedicated issues on their software repo, on what actions should be taken. Thanks!
@atzberg, are you able to get started? You can call @editorialbot generate my checklist here to get set up. Thanks.
Yes, I have started reviewing. Thanks for the info on the command for the interactive checklist.
@atzberg it says
Done! archive is now 10.5281/zenodo.8199879
@editorialbot recommend accept
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot recommend-accept
@editorialbot set v1.0.3 as version
Done! version is now v1.0.3
@BernardoFerreira the version tag on your repository is actually
@editorialbot generate pdf
@editorialbot recommend-accept
Sorry about that, fixed it!
👋 @openjournals/bcm-eics, this paper is ready to be accepted and published. Check final proof 👉📄 Download article. If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4440, then you can now move forward with accepting the submission by compiling again with the command
@Kevin-Mattheus-Moerman, checked the final proof, didn't find any issues!
@editorialbot accept
Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository. If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file. You can copy the contents for your CITATION.cff file here: CITATION.cff
If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.
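For illustration, a minimal CITATION.cff for this repository might look like the sketch below. The title, author, version, and DOI are taken from this review thread; the remaining keys are generic CFF 1.2.0 defaults to adapt as needed.

```yaml
# Minimal illustrative CITATION.cff sketch -- adapt before committing.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "CRATE"
authors:
  - family-names: "Ferreira"
    given-names: "Bernardo P."
version: "1.0.3"
doi: "10.5281/zenodo.8199879"
repository-code: "https://github.com/bessagroup/CRATE"
```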
🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘 |
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨 Here's what you must now do:
Any issues? Notify your editorial technical team...
@BernardoFerreira congratulations on this publication in JOSS!! I'd like to express my gratitude to the reviewers @RahulSundar, @atzberg, @Extraweich, @kingyin3613!! Thanks for processing this one so smoothly. |
🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉 If you would like to include a link to your paper from your README use the following code snippets:
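A typical JOSS badge snippet has the shape sketched below; the `10.xxxxx/joss.xxxxx` path is a placeholder to be replaced with this paper's actual DOI, which is not captured in this thread.

```markdown
[![DOI](https://joss.theoj.org/papers/10.xxxxx/joss.xxxxx/status.svg)](https://doi.org/10.xxxxx/joss.xxxxx)
```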
This is how it will look in your documentation:

We need your help! The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
Submitting author: @BernardoFerreira (Bernardo P. Ferreira)
Repository: https://github.com/bessagroup/CRATE
Branch with paper.md (empty if default branch): master
Version: v1.0.3
Editor: @Kevin-Mattheus-Moerman
Reviewers: @RahulSundar, @atzberg, @Extraweich, @kingyin3613
Archive: 10.5281/zenodo.8199879
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@RahulSundar & @atzberg & @Extraweich & @kingyin3613, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @Kevin-Mattheus-Moerman know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @RahulSundar
📝 Checklist for @atzberg
📝 Checklist for @Extraweich
📝 Checklist for @kingyin3613