
[REVIEW]: Disimpy: A massively parallel Monte Carlo simulator for synthesizing diffusion-weighted MRI data in Python #2527


Closed
60 tasks done
whedon opened this issue Jul 28, 2020 · 49 comments
Assignees
Labels
accepted published Papers published in JOSS Python recommend-accept Papers recommended for acceptance in JOSS. review TeX

Comments

@whedon

whedon commented Jul 28, 2020

Submitting author: @kerkelae (Leevi Kerkelä)
Repository: https://github.com/kerkelae/disimpy
Version: v0.1.1
Editor: @Kevin-Mattheus-Moerman
Reviewer: @DARSakthi, @grlee77, @ritagnunes
Archive: 10.5281/zenodo.4001687

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/ee3aa136859cb47595de7de8eb84b441"><img src="https://joss.theoj.org/papers/ee3aa136859cb47595de7de8eb84b441/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/ee3aa136859cb47595de7de8eb84b441/status.svg)](https://joss.theoj.org/papers/ee3aa136859cb47595de7de8eb84b441)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@DARSakthi & @grlee77 & @ritagnunes, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @Kevin-Mattheus-Moerman know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Review checklist for @DARSakthi

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@kerkelae) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @grlee77

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@kerkelae) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @ritagnunes

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@kerkelae) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon
Author

whedon commented Jul 28, 2020

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @DARSakthi, @grlee77, @ritagnunes it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:


  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications


For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon
Author

whedon commented Jul 28, 2020

Reference check summary:

OK DOIs

- None

MISSING DOIs

- https://doi.org/10.1007/978-3-642-15745-5_50 may be missing for title: High-fidelity meshes from tissue samples for diffusion MRI simulations
- https://doi.org/10.1016/j.jmr.2012.10.015 may be missing for title: Isotropic diffusion weighting in PGSE NMR by magic-angle spinning of the q-vector
- https://doi.org/10.1063/1.1695690 may be missing for title: Spin diffusion measurements: spin echoes in the presence of a time-dependent field gradient
- https://doi.org/10.1109/mcse.2011.37 may be missing for title: The NumPy array: a structure for efficient numerical computation
- https://doi.org/10.1109/tmi.2009.2015756 may be missing for title: Convergence and parameter choice for Monte-Carlo simulations of diffusion MRI
- https://doi.org/10.1016/j.neuroimage.2007.02.016 may be missing for title: Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution
- https://doi.org/10.1002/nbm.3998 may be missing for title: Quantifying brain microstructure with diffusion MRI: Theory and parameter estimation
- https://doi.org/10.1101/140459 may be missing for title: The role of diffusion MRI in neuroscience
- https://doi.org/10.1109/hotchips.2008.7476516 may be missing for title: Scalable parallel programming with CUDA

INVALID DOIs

- None

@whedon
Author

whedon commented Jul 28, 2020

@kerkelae

@whedon generate pdf

@whedon
Author

whedon commented Jul 28, 2020

@kerkelae

@whedon check references

@whedon
Author

whedon commented Jul 28, 2020

Reference check summary:

OK DOIs

- 10.1007/978-3-642-15745-5_50 is OK
- 10.1016/j.jmr.2015.10.012 is OK
- 10.1016/j.jmr.2012.10.015 is OK
- 10.1063/1.1695690 is OK
- 10.1145/2833157.2833162 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/tmi.2009.2015756 is OK
- 10.1016/j.neuroimage.2007.02.016 is OK
- 10.1002/nbm.3998 is OK
- 10.1101/140459 is OK
- 10.1109/hotchips.2008.7476516 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@Kevin-Mattheus-Moerman
Member

@DARSakthi, @grlee77, @ritagnunes this is just a friendly check in. I know some of you are busy on other reviews too. Could you give an update on when you expect to work on this review?

@ritagnunes

ritagnunes commented Aug 5, 2020 via email

@Kevin-Mattheus-Moerman
Member

Right, sorry I forgot about the ISMRM. Enjoy the conference!

@grlee77

grlee77 commented Aug 5, 2020

@kerkelae, can you briefly describe the contributions of the coauthors you included? I could only verify your name directly in the commit history, but understand that there are other ways to contribute.

@DARSakthi

@Kevin-Mattheus-Moerman Thanks for the reminder -- some other deadlines had dragged my attention away -- will have it done by Monday, latest. Cheers

@grlee77

grlee77 commented Aug 5, 2020

Regarding the following review criterion:

> If there are any performance claims of the software, have they been confirmed?

Is the script that produced Fig. 1 available anywhere? I find the result believable, but did not try to install Camino to verify. I can confirm that all of the examples in the Jupyter notebook in the repository ran quickly on an NVIDIA GTX-1080 Ti card.
@Kevin-Mattheus-Moerman: Is this adequate verification for JOSS?

update: I just added some timing statements to the mesh examples in the Jupyter notebook, and the result is consistent with the Disimpy curve in the figure.
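For context, the kind of lightweight timing check described above can be sketched as follows; `run_simulation` is a hypothetical placeholder, not the actual Disimpy API:

```python
import time

def run_simulation():
    # Hypothetical stand-in for a Disimpy mesh-based simulation call;
    # in the notebook this would be the package's own simulation function.
    return sum(i * i for i in range(100_000))

start = time.perf_counter()
result = run_simulation()
elapsed = time.perf_counter() - start
print(f"Simulation took {elapsed:.3f} s")
```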

@grlee77

grlee77 commented Aug 5, 2020

Overall the manuscript and software look good to me and I have only some minor suggestions.

minor points to address

1.) In the NumPy reference, the first author should be "van der Walt, S." rather than "Walt, S. van der"

2.) In the second sentence under features, I recommend the following grammatical change:
"generated in massively parallel on" -> "generated in a massively parallel fashion on"

3.) In the documentation's installation instructions, a one-step command that might be easier for users than downloading the repository and then navigating to the folder to run pip would be:
pip install git+https://github.com/kerkelae/disimpy.git
(pip will take care of downloading the repository for the user)

4.) It may be worth providing a version of the demo notebook on Google Colaboratory as well, so users can try it out without having to own a CUDA-compatible GPU.

5.) Can the authors confirm if Camino's CPU-based Monte-Carlo is multi-threaded? If so, I would mention that explicitly in the manuscript text or Fig. 1 caption as it makes the relative improvement on the GPU more impressive.

6.) Regarding the "state of the field" review criterion, I don't see much in the manuscript corresponding to this aside from the comparison to Camino. This is not an area where I am particularly familiar with the literature, so it would be good to know:

  • is Monte-Carlo the dominant method of doing these simulations, or are there other popular approaches?
  • are there other popular tools aside from Camino for doing this?

7.) The included references look good, but I would also reference previous work doing similar types of simulations on the GPU. For example:

Khieu-Van Nguyen, Edwin Hernández-Garzón, Julien Valette, Efficient GPU-based Monte-Carlo simulation of diffusion in real astrocytes reconstructed from confocal microscopy, Journal of Magnetic Resonance, Volume 296, 2018, Pages 188-199, https://doi.org/10.1016/j.jmr.2018.09.013.

Christopher A. Waudby, John Christodoulou, GPU accelerated Monte Carlo simulation of pulsed-field gradient NMR experiments, Journal of Magnetic Resonance, Volume 211, Issue 1, 2011, Pages 67-73, https://doi.org/10.1016/j.jmr.2011.04.004.

(optional)
It may also be worth mentioning other complementary software that could generate more realistic tissue models as input to the simulations. I am not familiar with specific tools for this, but am aware of the following publication (unfortunately the implementation described in it does not appear to be publicly available):

Kévin Ginsburger, Felix Matuschke, Fabrice Poupon, Jean-François Mangin, Markus Axer, Cyril Poupon,
MEDUSA: A GPU-based tool to create realistic phantoms of the brain microstructure using tiny spheres,
NeuroImage, Volume 193, 2019, Pages 10-24, https://doi.org/10.1016/j.neuroimage.2019.02.055

misc comments
Not having continuous integration seems okay for this library as the free CI services do not offer the needed GPU hardware.

Validation of accuracy of the simulations was not discussed in the manuscript, but from what I can tell, the authors have done validation via the following:
1.) examples in the tutorials:
- free water diffusion matches the analytical solution
- diffraction peaks generated as in Avram et al. 2008
2.) via tests/test_simulations.py
- some test cases compare to stored outputs of Camino
- mesh-based sphere gives similar result to an analytical sphere
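To illustrate the first validation point above: for free diffusion in a pulsed-gradient spin-echo (PGSE) experiment, the analytical signal attenuation is S/S0 = exp(-b·D) with b = γ²G²δ²(Δ − δ/3). A minimal sketch with illustrative parameter values (assumed here, not taken from the package):

```python
import math

# Illustrative PGSE parameters (assumed values, not from Disimpy)
GAMMA = 267.513e6   # 1H gyromagnetic ratio (rad/s/T)
G = 37e-3           # gradient magnitude (T/m)
delta = 10e-3       # gradient pulse duration (s)
Delta = 30e-3       # diffusion time (s)
D = 2e-9            # free diffusivity of water (m^2/s)

# Stejskal-Tanner b-value and the analytical free-diffusion attenuation
b = GAMMA**2 * G**2 * delta**2 * (Delta - delta / 3)
attenuation = math.exp(-b * D)
print(f"b = {b:.3e} s/m^2, S/S0 = {attenuation:.4f}")
```

A simulated free-diffusion signal should converge to this value as the number of random walkers grows.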

@fnery

fnery commented Aug 6, 2020

Hi @grlee77, thanks for the feedback. Just a quick comment: in terms of direct commits to the repo itself, I didn't make a lot of them, and they are no longer visible because we cleaned up the commit history for submission (we made an orphan branch). They are still visible as closed PRs. In any case, I will let @kerkelae describe the authors' contributions.

@kerkelae

kerkelae commented Aug 6, 2020

> @kerkelae, can you briefly describe the contributions of the coauthors you included? I could only verify your name directly in the commit history, but understand that there are other ways to contribute.

Good point @grlee77. I am the main author and have written most of the code and the manuscript. Fabio Nery has helped by implementing various algorithms, making decisions on how to package the code, and testing. Matt Hall and Chris Clark have contributed to the software by being closely involved in the planning and development of much of the code that the final package consists of.

Thank you for the feedback. We will address the other issues you raised very soon.

@DARSakthi

Overall the paper is well done. The tutorial section of the documentation is a nice touch, and the mathematical specification in the final section of the paper is also appreciated. Having a colab notebook would be an excellent idea.

I echo the suggestions of @grlee77, in particular verification of the performance claims. I do question the use of the word synthesising in the title -- while 'to synthesise' can mean to produce artificially, a more common usage is to combine. If the authors are partial to the title then no change is necessary, but a better word may clarify the function of the software.

@kerkelae

kerkelae commented Aug 12, 2020

@grlee77 and @DARSakthi, thank you for the feedback. We have made a number of changes to the manuscript and the documentation. Please see the points below.

  • We made the small fixes and changes suggested by @grlee77. We also have no particular preference for the word "synthesize", which we replaced with "generate". The updated manuscript contains the changed title, but I am not sure how to change the title of my JOSS submission. Could @Kevin-Mattheus-Moerman help us with this please?

  • Please find here a simple script and a ply file that can be used to reproduce Figure 1. Since we didn't find large differences between the runtimes of individual simulations, we reported a single set of runtimes instead of averages. Please note that reproducing the figure requires the installation of Java and Camino. Camino is single-threaded, and many studies using Camino have run simulations in parallel on clusters.

  • We simplified the installation instructions according to the suggestion by @grlee77 and expanded the tutorial to describe how to run the examples on Google Colaboratory. Special thanks for this tip!

  • We expanded the "Statement of need" section to answer points 6 and 7 by @grlee77. There exist other methods for simulating dMRI experiments, for example, solving the Bloch-Torrey equation (which adds a diffusion term to the Bloch equation), but the Monte Carlo method is more generalizable. To our knowledge, Camino is the most popular Monte Carlo simulator, but other simulators have been developed recently, some of which are GPU-accelerated. However, we are not aware of other approachable Python packages for performing GPU-accelerated dMRI simulations, which is why we find Disimpy to be of value. We also mentioned the need for relevant microstructural models.

  • Indeed, we do not have continuous integration since the services we are familiar with do not provide support for CUDA. To validate the functionality of the package, we have written unit tests and compared the simulated signals to analytical solutions and other simulation results (generated by Camino and the example in the tutorial, for instance).

@kerkelae

@whedon generate pdf

@whedon
Author

whedon commented Aug 12, 2020

@DARSakthi

Is the test script you linked to available in the repo too? I can't seem to find it. If not, I think it could be a good thing to include from an end-user standpoint, to help evaluate whether to use the software or not.

With the most recent round of changes, I am happy to recommend the paper for publication. Well done!

@ritagnunes

I really enjoyed being able to run the Tutorial on Google Colaboratory. That was a really nice addition!
It will be very helpful for potential/new users to have a quick grasp of the implemented software features.

Also, the provided examples make it easy to verify the requirement of simulating a high number of random walkers to match the theoretical predictions motivating the need for this software. I also like that the authors have included a range of microstructural environments, from free to restricted diffusion, including multiple compartments and a diffraction example replicating a previous study.

It was harder for me to evaluate the gains in simulation speed as the software and corresponding tutorial do not run without GPU-enabled acceleration. Although I did not run Camino independently, I trust that the simulation times reported in the tests are correct.

The provided documentation is quite comprehensive and the paper clearly motivates the need for the software, describing the implemented features. I also really like that the basic theoretical equations have been included.

I agree with @DARSakthi and I am also happy to recommend the publication of this paper.
Congratulations!

@arfon
Member

arfon commented Aug 25, 2020

> Thanks @arfon and @Kevin-Mattheus-Moerman. I had changed the title in the paper.md file and just wanted to know if there is something more I need to do because the old title was still shown on the JOSS website.

The version on the JOSS site will update when the paper is published.

@Kevin-Mattheus-Moerman
Member

@whedon generate pdf

@whedon
Author

whedon commented Aug 26, 2020

@Kevin-Mattheus-Moerman
Member

@whedon check references

@whedon
Author

whedon commented Aug 26, 2020

Reference check summary:

OK DOIs

- 10.1016/j.neuroimage.2020.117107 is OK
- 10.1016/j.jmr.2018.09.013 is OK
- 10.3389/fninf.2020.00008 is OK
- 10.1016/j.neuroimage.2019.116120 is OK
- 10.1007/978-3-319-46630-9_4 is OK
- 10.1007/978-3-642-15745-5_50 is OK
- 10.1016/j.jmr.2015.10.012 is OK
- 10.1016/j.jmr.2012.10.015 is OK
- 10.1063/1.1695690 is OK
- 10.1145/2833157.2833162 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/tmi.2009.2015756 is OK
- 10.1016/j.neuroimage.2007.02.016 is OK
- 10.1002/nbm.3998 is OK
- 10.1101/140459 is OK
- 10.1109/hotchips.2008.7476516 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@Kevin-Mattheus-Moerman
Member

Kevin-Mattheus-Moerman commented Aug 26, 2020

@kerkelae I've checked your paper and can confirm it looks in order.

These are final steps to complete:

  • Please archive a copy of the software on Zenodo and report back here with the DOI of the archived version. When you create this archived version, be sure to have the title and author metadata match that of the paper (you may need to manually alter this).

  • Can you inform me of the most recent version tag for this work? Is it still at v0.1?

@kerkelae

kerkelae commented Aug 26, 2020

@Kevin-Mattheus-Moerman, thank you. Since we changed a few things during the review, I created a new release: v0.1.1. I archived this version on Zenodo. The DOI is http://doi.org/10.5281/zenodo.4001687.

@Kevin-Mattheus-Moerman
Member

Kevin-Mattheus-Moerman commented Aug 26, 2020

@kerkelae thanks. However you still need to amend the title and authors of that archive to match the paper title. Let me know when you've completed that.

@Kevin-Mattheus-Moerman
Member

@whedon set 10.5281/zenodo.4001687 as archive

@whedon
Author

whedon commented Aug 26, 2020

OK. 10.5281/zenodo.4001687 is the archive.

@Kevin-Mattheus-Moerman
Member

@whedon set v0.1.1 as version

@whedon
Author

whedon commented Aug 26, 2020

OK. v0.1.1 is the version.

@kerkelae

@Kevin-Mattheus-Moerman It's fixed now.

@Kevin-Mattheus-Moerman
Member

@whedon accept

@whedon whedon added the recommend-accept Papers recommended for acceptance in JOSS. label Aug 26, 2020
@whedon
Author

whedon commented Aug 26, 2020

Attempting dry run of processing paper acceptance...

@whedon
Author

whedon commented Aug 26, 2020

Reference check summary:

OK DOIs

- 10.1016/j.neuroimage.2020.117107 is OK
- 10.1016/j.jmr.2018.09.013 is OK
- 10.3389/fninf.2020.00008 is OK
- 10.1016/j.neuroimage.2019.116120 is OK
- 10.1007/978-3-319-46630-9_4 is OK
- 10.1007/978-3-642-15745-5_50 is OK
- 10.1016/j.jmr.2015.10.012 is OK
- 10.1016/j.jmr.2012.10.015 is OK
- 10.1063/1.1695690 is OK
- 10.1145/2833157.2833162 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/tmi.2009.2015756 is OK
- 10.1016/j.neuroimage.2007.02.016 is OK
- 10.1002/nbm.3998 is OK
- 10.1101/140459 is OK
- 10.1109/hotchips.2008.7476516 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon
Author

whedon commented Aug 26, 2020

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#1673

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1673, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@Kevin-Mattheus-Moerman
Member

@whedon accept deposit=true

@whedon whedon added accepted published Papers published in JOSS labels Aug 26, 2020
@whedon
Author

whedon commented Aug 26, 2020

Doing it live! Attempting automated processing of paper acceptance...

@whedon
Author

whedon commented Aug 26, 2020

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon
Author

whedon commented Aug 26, 2020

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.02527 joss-papers#1674
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.02527
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@arfon arfon closed this as completed Sep 3, 2020
@whedon
Author

whedon commented Sep 3, 2020

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02527/status.svg)](https://doi.org/10.21105/joss.02527)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02527">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02527/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02527/status.svg
   :target: https://doi.org/10.21105/joss.02527

This is how it will look in your documentation: a small "DOI" status badge linking to the paper.

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
