
[REVIEW]: Φ-ML: A Science-oriented Math and Neural Network Library for Jax, PyTorch, TensorFlow & NumPy #6171


Closed
editorialbot opened this issue Dec 22, 2023 · 98 comments
Labels: accepted, C++, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot

editorialbot commented Dec 22, 2023

Submitting author: @holl- (Philipp Holl)
Repository: https://github.com/tum-pbs/PhiML
Branch with paper.md (empty if default branch): joss-submission
Version: 1.4.0
Editor: @mstimberg
Reviewers: @wandeln, @chaoming0625, @gauravbokil8
Archive: 10.6084/m9.figshare.25282300

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/0d2c475d01add330399e6d6bdb7a78d4"><img src="https://joss.theoj.org/papers/0d2c475d01add330399e6d6bdb7a78d4/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/0d2c475d01add330399e6d6bdb7a78d4/status.svg)](https://joss.theoj.org/papers/0d2c475d01add330399e6d6bdb7a78d4)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@wandeln & @chaoming0625, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mstimberg know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Checklists

📝 Checklist for @chaoming0625

📝 Checklist for @gauravbokil8

📝 Checklist for @wandeln

@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=0.28 s (356.7 files/s, 150895.3 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          62           4580           5861          18800
Jupyter Notebook                15              0           2534           6361
C++                             10            345            312           1891
Markdown                         8            234              0            562
C/C++ Header                     1             65             22            346
TeX                              1             37              0            253
YAML                             3             17              4             81
-------------------------------------------------------------------------------
SUM:                           100           5278           8733          28294
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 2569

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3390/atmos13020180 is OK
- 10.1146/annurev-fluid-010719-060214 is OK
- 10.3847/1538-4357/abb9a7 is OK
- 10.1111/j.1365-2966.2004.07442.x is OK
- 10.1088/0004-637X/803/2/50 is OK
- 10.1016/j.oregeorev.2015.01.001 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.1007/978-1-4302-2758-8_14 is OK
- 10.1038/s41586-020-2649-2 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@mstimberg

@editorialbot add @gauravbokil8 as reviewer

@editorialbot

@gauravbokil8 added to the reviewers list!

@mstimberg

👋🏼 @holl- @wandeln @chaoming0625 @gauravbokil8, this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread.

These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#6171 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

Please feel free to ping me (@mstimberg) if you have any questions/concerns.

@gauravbokil8

gauravbokil8 commented Jan 3, 2024

Review checklist for @gauravbokil8

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/tum-pbs/PhiML?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@holl-) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@chaoming0625

chaoming0625 commented Jan 4, 2024

Review checklist for @chaoming0625

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/tum-pbs/PhiML?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@holl-) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@wandeln

wandeln commented Jan 5, 2024

Review checklist for @wandeln

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/tum-pbs/PhiML?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@holl-) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@mstimberg

👋 Hi everyone, and a Happy New Year ✨ !

@wandeln and @chaoming0625 I see that you checked off most/all the boxes in the checklist. Are you still planning on commenting on any specifics and/or suggest any changes to the software or paper? Or would you recommend it for acceptance in JOSS in its current state? Thanks 🙏

@gauravbokil8

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@gauravbokil8

Hi @mstimberg. I recommend the paper and code for acceptance in their current state. The code functionality and corresponding documentation are complete and work as expected. The examples are helpful as well.

@chaoming0625

chaoming0625 commented Jan 14, 2024

Sorry, @mstimberg , I have been delayed by some other matters, and I will push forward with the review as soon as possible.

@chaoming0625

Dear Philipp,

This is excellent work that introduces a systematic array/tensor operation paradigm based on explicitly named dimensions. However, I have some questions:

  1. The related work section could be expanded. Assigning names or labels to tensor dimensions to increase readability and reliability has been explored in various libraries, such as xarray (2017), labeled_tensor in TensorFlow, namedtensor in PyTorch, and, more recently, the einops package.
  2. The limitations of this approach should also be discussed, such as the cognitive load for users to remember many axis labels, potential speed impacts of axis ordering (see also the minor comments), errors from automatic reshaping if axis mistakes are not caught, and whether this paradigm can be applied widely in the physical sciences or only in a subset where explicit labels are common. Named tensors have been proposed for years with little adoption in ML - is physical science a more fitting domain?
  3. According to the paper, custom CUDA operations appear to be supported only in TensorFlow. Are other frameworks like PyTorch supported?
  4. The paper title may be too broad given the limitations of named tensors. Science covers an extremely wide range of domains, and this approach may not be applicable, or may face adoption issues, in many areas. Perhaps the application domain in the title could be narrowed.

Minor comments:

  1. There appear to be some bugs or compatibility issues in the framework - see the documentation at https://tum-pbs.github.io/PhiML/Linear_Solves.html.
  2. Performance compared to raw PyTorch/JAX should be benchmarked. The paper claims that execution speed is guaranteed, but no data demonstrates the Python overhead of these wrappers compared to PyTorch's impressive native performance.

Finally, thank you for the submission. I hope these suggestions are helpful for improving the paper. Please let me know if you would like me to clarify or expand on any point.

Best,

Chaoming Wang

@holl-

holl- commented Jan 16, 2024

Hi @chaoming0625,

Thanks for your great comments! I've added the proposed references (except for labeled_tensor, which no longer seems to exist) and fixed the error in the documentation.

Custom CUDA kernels are currently only available for TensorFlow. However, they are optional and only increase performance for certain operations that are not natively implemented in TensorFlow. We have considered adding custom PyTorch operations, but this would only lead to minor performance gains.
We found that a much larger performance gain can be achieved by switching to Jax with JIT. That level of performance is hard to beat, even with custom CUDA kernels.

By science-oriented, we don't mean that all of science should use this library. Rather, PhiML focuses on scientific applications as opposed to pure machine learning. For applications that center on network architectures and can rely on supervised learning, there is little benefit to using PhiML. This library shines when it comes to differentiable simulations, optimization with analytic derivatives, and integrating machine learning tools into scientific applications.
PhiML covers a similar range as NumPy/SciPy, mapping a large part of their APIs. Its core features, such as named dimensions and efficient (sparse) linear solves, can be applied to a wide variety of scientific applications.

Automatic reshaping does have an impact on performance, but only when running the Python code as-is; once JIT-compiled, these overheads are fully optimized out. As for performance benchmarks, I'm planning to write a simple simulation in PhiML and the same simulation natively in PyTorch/TensorFlow/Jax to compare the runtimes with JIT compilation. Is this what you meant?
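A minimal timing harness for that kind of comparison might look like the following. This is a generic sketch, not Φ-ML or framework code: `benchmark` and the two compared workloads are made up for illustration, and the warmup runs are there to absorb one-time costs such as JIT compilation.

```python
import time

def benchmark(fn, *args, repeats=20, warmup=3):
    """Best-of-N wall-clock timing for fn(*args).
    Warmup runs are discarded so that one-time costs such as
    JIT compilation do not skew the measurement."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return min(times)

# Placeholder workloads standing in for "the same simulation step"
# written once in Phi-ML and once natively:
data = list(range(10_000))
t_a = benchmark(lambda: sum(x * x for x in data))
t_b = benchmark(lambda: sum(map(lambda x: x * x, data)))
print(f"variant A: {t_a * 1e6:.1f} us, variant B: {t_b * 1e6:.1f} us")
```

Reporting the minimum rather than the mean reduces noise from OS scheduling; for GPU frameworks one would additionally synchronize the device before stopping the clock.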

Unlike related libraries, our named-dimension implementation is only one part of the puzzle. The arguably bigger impact comes from dimension types. In fact, all library functions are agnostic to the dimension names and select the dimensions to operate on based on their types. The same is recommended for all user-written library/helper functions. Dimension names should only be assigned at the highest level by the user, and should ideally be fully contained in one file. Remembering a handful of dimension names seems no more difficult than remembering the dimension order in another library, something that is not necessary in PhiML.

We have applied this paradigm to many different kinds of tasks, from custom neural network libraries to PDEs to rigid body simulations to curve fitting problems. Despite there being only a handful of dimension types, the system has scaled remarkably well to all settings. Dimension names are especially useful in scientific tasks as a wider range of tensor functions is used there compared to pure machine learning applications.
I'll add an example to show how dimension names prevent errors by only allowing correct / sensible operations.
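As a rough illustration of what name-based alignment buys, here is a toy sketch in plain NumPy. It is deliberately not the Φ-ML API: `named_add` and the dimension tuples are invented for this example only.

```python
import numpy as np

def named_add(a, a_dims, b, b_dims):
    """Toy name-based broadcasting (NOT the Phi-ML API): operands are
    aligned by dimension *name*, so axis order is irrelevant and
    missing dimensions broadcast automatically."""
    all_dims = list(a_dims) + [d for d in b_dims if d not in a_dims]

    def align(x, dims):
        # Reorder x's axes to follow all_dims, then insert size-1
        # axes for dimensions x does not carry.
        present = [d for d in all_dims if d in dims]
        x = np.transpose(x, [dims.index(d) for d in present])
        sizes = iter(x.shape)
        return x.reshape([next(sizes) if d in dims else 1 for d in all_dims])

    return align(a, a_dims) + align(b, b_dims), all_dims

# Axis order does not matter: ('x', 'batch') + ('batch', 'x') still
# lines up correctly, which positional broadcasting cannot guarantee.
a = np.arange(6).reshape(3, 2)   # dims ('x', 'batch')
b = np.arange(6).reshape(2, 3)   # dims ('batch', 'x')
result, dims = named_add(a, ('x', 'batch'), b, ('batch', 'x'))
```

Because operands are matched by name, transposing an input cannot silently change the result, which is one class of error a typed/named dimension system rules out by construction.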

Best,
Philipp

@holl-

holl- commented Jan 17, 2024

@chaoming0625 I've uploaded a preliminary performance comparison notebook, comparing Φ-ML vs native performance.

@holl-

holl- commented Jan 17, 2024

@chaoming0625 I've also uploaded a preliminary notebook showcasing the advantages of named and typed dimensions. I was again surprised by how difficult many things are in PyTorch.

@chaoming0625

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@chaoming0625

Dear @holl- ,

Thank you for addressing most of my concerns. One additional thought concerns the science-oriented aspect.

Based on my understanding, numerical computing frameworks like NumPy, PyTorch, and JAX have demonstrated significant impact in scientific computing (Nature 585, 357–362, 2020; Nature Methods, 17, 261–272). These frameworks are specifically designed for scientific purposes and have become ubiquitous across scientific disciplines.

Considering this, it's debatable to claim $\Phi_\textrm{ML}$ is uniquely "science-oriented" relative to previous solutions. The current phrasing implies past frameworks were not designed for science, which is inaccurate. Perhaps it would be better to highlight $\Phi_\textrm{ML}$'s specific features that make it well-suited and user-friendly for scientists, rather than positioning it as the first science-oriented framework, given the current paper title.

Best,

Chaoming

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41573-019-0024-5 is OK
- 10.1038/s41586-021-03819-2 is OK
- 10.3390/atmos13020180 is OK
- 10.1146/annurev-fluid-010719-060214 is OK
- 10.1002/inf2.12028 is OK
- 10.3847/1538-4357/abb9a7 is OK
- 10.1111/j.1365-2966.2004.07442.x is OK
- 10.1088/0004-637X/803/2/50 is OK
- 10.1016/j.oregeorev.2015.01.001 is OK
- 10.1038/s41586-018-0337-2 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.1007/978-1-4302-2758-8_14 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.5334/jors.148 is OK
- 10.1063/5.0047428 is OK
- 10.1145/3490035.3490283 is OK
- 10.1152/ajprenal.1993.264.4.F629 is OK
- 10.48550/arXiv.2206.00342 is OK
- 10.1117/12.2656026 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

⚠️ Error preparing paper acceptance. The generated XML metadata file is invalid.

Element inline-formula is not declared in ext-link list of possible children

@mstimberg

mstimberg commented Feb 29, 2024

@holl- Unfortunately there is an error in the document: not in the PDF, but in the JATS file that is used for archiving the paper. The reason is the link to $\Phi_\textrm{Flow}$, written as [$\Phi_\textrm{Flow}$](https://github.com/tum-pbs/PhiFlow). In JATS, a link description cannot contain an "inline formula", i.e. anything between $…$.

I'll leave it up to you what alternative you prefer, it could be e.g.

[PhiFlow](https://github.com/tum-pbs/PhiFlow) ($\Phi_\textrm{Flow}$)

rendered as

PhiFlow ($\Phi_\textrm{Flow}$)

or

$\Phi_\textrm{Flow}$ (https://github.com/tum-pbs/PhiFlow)

rendered as

$\Phi_\textrm{Flow}$ (https://github.com/tum-pbs/PhiFlow)

@holl-

holl- commented Feb 29, 2024

@mstimberg Okay, I replaced the link with the first option and updated the branch.

@openjournals openjournals deleted a comment from editorialbot Feb 29, 2024
@mstimberg

@editorialbot generate pdf

@mstimberg

@editorialbot recommend-accept

@mstimberg Okay, I replaced the link with the first option and updated the branch.

Thanks a lot, trying again.

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5072, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label Feb 29, 2024
@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/s41573-019-0024-5 is OK
- 10.1038/s41586-021-03819-2 is OK
- 10.3390/atmos13020180 is OK
- 10.1146/annurev-fluid-010719-060214 is OK
- 10.1002/inf2.12028 is OK
- 10.3847/1538-4357/abb9a7 is OK
- 10.1111/j.1365-2966.2004.07442.x is OK
- 10.1088/0004-637X/803/2/50 is OK
- 10.1016/j.oregeorev.2015.01.001 is OK
- 10.1038/s41586-018-0337-2 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.1007/978-1-4302-2758-8_14 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.5334/jors.148 is OK
- 10.1063/5.0047428 is OK
- 10.1145/3490035.3490283 is OK
- 10.1152/ajprenal.1993.264.4.F629 is OK
- 10.48550/arXiv.2206.00342 is OK
- 10.1117/12.2656026 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@arfon

arfon commented Mar 1, 2024

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Holl
  given-names: Philipp
  orcid: "https://orcid.org/0000-0001-9246-5195"
- family-names: Thuerey
  given-names: Nils
  orcid: "https://orcid.org/0000-0001-6647-8910"
doi: 10.6084/m9.figshare.25282300
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Holl
    given-names: Philipp
    orcid: "https://orcid.org/0000-0001-9246-5195"
  - family-names: Thuerey
    given-names: Nils
    orcid: "https://orcid.org/0000-0001-6647-8910"
  date-published: 2024-03-01
  doi: 10.21105/joss.06171
  issn: 2475-9066
  issue: 95
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6171
  title: "$\\Phi_\\textrm{ML}$: Intuitive Scientific Computing with
    Dimension Types for Jax, PyTorch, TensorFlow & NumPy"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06171"
  volume: 9
title: "$\\Phi_\\textrm{ML}$: Intuitive Scientific Computing with
  Dimension Types for Jax, PyTorch, TensorFlow & NumPy"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.06171 joss-papers#5080
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.06171
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels Mar 1, 2024
@arfon

arfon commented Mar 1, 2024

@holl- the JOSS website (and citation metadata) can't handle TeX. What would a plain-text version of this be? e.g., Phi_ML or something like that: https://joss.theoj.org/papers/10.21105/joss.06171

@mstimberg

Ah, that's unfortunate. If Unicode is fine, Φ_ML would be another option, I guess.

@holl-

holl- commented Mar 1, 2024

@mstimberg @arfon Φ-ML would be my preference if possible but Φ_ML also works.

@arfon

arfon commented Mar 3, 2024

OK, Φ-ML looks to work.

@wandeln, @chaoming0625, @gauravbokil8 – many thanks for your reviews here and to @mstimberg for editing this submission! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@holl- – your paper is now accepted and published in JOSS ⚡🚀💥

@arfon arfon closed this as completed Mar 3, 2024
@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.06171/status.svg)](https://doi.org/10.21105/joss.06171)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.06171">
  <img src="https://joss.theoj.org/papers/10.21105/joss.06171/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.06171/status.svg
   :target: https://doi.org/10.21105/joss.06171

This is how it will look in your documentation:

DOI

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

@holl-

holl- commented Mar 3, 2024

@mstimberg @arfon @wandeln @chaoming0625 @gauravbokil8 Thank you all for your support 🎉🎉🎉

@gauravbokil8

@holl- Good luck
