
[REVIEW]: NiBetaSeries: task related correlations in fMRI #1295


Closed
18 tasks done
whedon opened this issue Mar 4, 2019 · 104 comments

whedon commented Mar 4, 2019

Submitting author: @jdkent (James Kent)
Repository: https://github.com/HBClab/NiBetaSeries
Version: v0.3.2
Editor: @arokem
Reviewer: @snastase
Archive: 10.5281/zenodo.3385339

Status


Status badge code:

HTML: <a href="http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290"><img src="http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290/status.svg)](http://joss.theoj.org/papers/a0d2ec4e06309e9b1e21dc302c396290)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@snastase, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. If you have any questions or concerns, please let @arokem know.

Please try to complete your review in the next two weeks.

Review checklist for @snastase

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: v0.3.2
  • Authorship: Has the submitting author (@jdkent) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

whedon commented Mar 4, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @snastase it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' here: https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands


whedon commented Mar 4, 2019

Attempting PDF compilation. Reticulating splines etc...


whedon commented Mar 4, 2019


arokem commented Mar 25, 2019

@snastase : have you had a chance to take a look?

@snastase

@arokem @jdkent Sorry for losing track of this! Working through it now...

@snastase

Okay, I think I have a grip on this now—nice contribution and great to see things like this wrapped up into BIDS Apps! To summarize, this project aims to compute beta-series correlations on BIDS-compliant data preprocessed using fMRIPrep. I have some general comments and then a laundry list of smaller-scale suggestions. For the really nit-picky stuff, I can make a PR if you don't mind. Also, bear in mind that I'm more neuroscientist than software developer per se, so apologies if any of these comments are way off the mark!

General comments:
First of all, the PDF and introductory documentation (betaseries.html) could be made a little clearer and more concise. For example, it wasn't immediately obvious to me what the actual output of the tool is: can I get the actual series of estimated betas, or only the inter-ROI beta-series correlation matrices? It might be nice to simply get the beta series and forget the correlations (e.g., for a downstream MVPA analysis). Is it absolutely necessary to provide an atlas to define ROIs? If I don't provide an atlas, will it compute beta-series correlations between all voxels (which would be computationally intensive)? Basically, what I'm trying to say is that it wasn't obvious what to expect in terms of input and output (particularly output), what moving parts are necessary, and how much flexibility there is. I could figure these things out by trial and error, but it seems useful to lay this out a bit more explicitly in the documentation.

I'm a little unsure about the Jupyter Notebook-style tutorial walkthrough in the "How to run" documentation. If I were planning to run this, I'd likely be running it from the Linux command line (maybe via a scheduler like Slurm on my server), not invoking it via Python's subprocess. You jump through a bunch of hoops with Python just to download and modify the example data, and only one cell of the tutorial actually runs nibs. I think this material is useful, particularly for users to see how to modify idiosyncratic OpenNeuro datasets, but I'm not sure there's enough focus on the nibs invocation itself. An alternative approach would be to upload the minimal dataset with all necessary modifications to figshare or something similar and download that in the tutorial. This would avoid spending so much of the tutorial on data manipulation and free up a few more cells for describing the nibs command-line invocation and its various options.

This brings up the point that, if the preprocessed BIDS derivatives (e.g., in *_events.tsv) are fairly standard, should we expect nibs to be able to handle them internally? For example, you manually rename some columns and reassign "Yes"/"No" values to 1 and 0 to satisfy assumptions. Another approach would be to build some optional arguments into the nibs CLI that allow the user to specify column names and acceptable value names (and map them to numerical values if need be). For example, when I run nibs, I might have a command-line argument specifying that conditions is the column name indicating trial types and that it should have three possible condition values (neutral, congruent, and incongruent), and another argument specifying the correct column and the mapping {'Yes': 1, 'No': 0}. I'm not necessarily saying this should be the way things are, just offering this as an alternative approach. I'm genuinely curious whether this would be feasible and whether it's better or worse in terms of software design.
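
To make this concrete, here is a rough pandas sketch of the kind of events-file cleanup the tutorial currently does by hand, which CLI arguments like those described above could absorb. The file name and column names here are hypothetical (nibs has no such flags today):

```python
import pandas as pd

# Hypothetical cleanup of a BIDS *_events.tsv file before running nibs.
events = pd.read_csv("sub-01_task-stroop_events.tsv", sep="\t")

# Rename the dataset's idiosyncratic trial-type column to the name nibs expects.
events = events.rename(columns={"conditions": "trial_type"})

# Map categorical correctness values onto the 1/0 coding the model assumes.
events["correct"] = events["correct"].map({"Yes": 1, "No": 0})

events.to_csv("sub-01_task-stroop_events.tsv", sep="\t", index=False)
```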

Specific comments and questions:

  • Would it be worth making a Singularity image for this? For example, I'm almost exclusively running fMRIPrep and MRIQC apps on a server via Singularity because I don't have installation privileges. I suppose the alternative is telling users to pip install nibetaseries in a conda environment or something along these lines.

  • Speaking of installation, if you're degenerate like me and still have a Python 2.7 installation on your machine, pip install nibetaseries will try to install this under 2.7 and fail. Something like pip3 install nibetaseries or python3 -m pip install nibetaseries works more reliably. I would slightly expand the installation documentation page, specifying the supported Python versions, etc.

  • It might be worth pointing users to Binder (https://mybinder.org) for running the tutorial Jupyter Notebook interactively.

  • I understood that, based on the atlas provided, beta series are computed per voxel and then averaged across voxels within each ROI, as opposed to averaging time series across voxels and then computing the beta series? Is there a reason (or reference) for taking this approach? (See the sketch after this list.)

  • What are the recommendations for high-/low-pass filtering? Is there a precedent in the literature for any recommended values? In fact, the documentation mentions both low- and high-pass filtering, but I only see an option for supplying a low-pass filter in the usage documentation.

  • Some of the multiword command-line arguments use "_" and some use "-"... I would just use underscores, e.g., in --atlas-img, for consistency with the other arguments (e.g., --session_label, --hrf_model).

  • Is this backward-compatible with older-style BIDS derivatives from fMRIPrep, e.g., files with and without the "desc-" entity?

  • One issue I've encountered with people running apps like fMRIPrep is confusion about the "work" directory; namely, whether it can be safely deleted, whether it should be deleted when re-running from scratch, etc. It would be good to make a note of this in the documentation.

  • In the documentation and PDF, I would make it a little more explicit that the "beta" is a colloquial term for the parameter estimates (or regression coefficients) in a GLM.

  • It should be made abundantly clear in the documentation that this is running the "LSS" version of the analysis. Are there future plans to allow for optionally running the "LSA" version?

  • The “How to run NiBetaSeries” section of the documentation unpacks strangely and doesn’t allow the user to scroll down through headings; at first I thought the download links were the only thing there. Clicking on the subheadings in the table of contents, however, brings you to a separate page with the walkthrough. Is there a way to combine these such that the download links simply appear at the top of the same page as the walkthrough?

  • Under the "References" heading in the betaseries.html documentation, I would include the full reference text and DOI links.

  • Cite the paper for the OpenNeuro dataset you use in the tutorial documentation:
    Verstynen, T. D. (2014). The organization and dynamics of corticostriatal pathways link the medial orbitofrontal cortex to future behavioral responses. Journal of Neurophysiology, 112(10), 2457–2469. https://doi.org/10.1152/jn.00221.2014

  • I would cite Abdulrahman & Henson (2015) in the PDF.

  • I would also cite the BIDS Apps paper in the PDF as this is a BIDS App:
    Gorgolewski, K. J., Alfaro-Almagro, F., Auer, T., Bellec, P., Capotă, M., Chakravarty, M. M., ... & Poldrack, R. A. (2017). BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLOS Computational Biology, 13(3), e1005209. https://doi.org/10.1371/journal.pcbi.1005209

  • In the PDF, update the fMRIPrep reference by Esteban et al. to the Nature Methods version:
    Esteban, O., Markiewicz, C., Blair, R. W., Moodie, C., Isik, A. I., Erramuzpe, A., Kent, J. D., Goncalves, M., DuPre, E., Snyder, M., Oya, H., Ghosh, S., Wright, J., Durnez, J., Poldrack, R., & Gorgolewski, K. J. (2019). FMRIPrep: a robust preprocessing pipeline for functional MRI. Nature Methods, 16, 111–116.

  • Code coverage is only 70%... might be worth trying to increase this.
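
On the per-voxel versus per-ROI averaging question above, here is a minimal numpy sketch of the two orders of operations (toy data, not NiBetaSeries code). It illustrates that for plain OLS with a design matrix shared across voxels, the two are algebraically equivalent, though they diverge once per-voxel noise modeling (e.g., prewhitening) enters the pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_regressors = 200, 50, 12

Y = rng.standard_normal((n_timepoints, n_voxels))      # one ROI: time x voxels
X = rng.standard_normal((n_timepoints, n_regressors))  # shared design matrix

# Option A: fit the GLM per voxel, then average the betas across the ROI.
betas_per_voxel = np.linalg.lstsq(X, Y, rcond=None)[0]  # regressors x voxels
roi_betas_a = betas_per_voxel.mean(axis=1)

# Option B: average the time series across the ROI, then fit one GLM.
roi_betas_b = np.linalg.lstsq(X, Y.mean(axis=1), rcond=None)[0]

# The OLS estimator is linear in Y, so averaging betas equals the beta of
# the averaged time series; option B just fits far fewer models.
assert np.allclose(roi_betas_a, roi_betas_b)
```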


jdkent commented Apr 1, 2019

Thank you so much for the in-depth review @snastase! I will be working on addressing these comments via issues/pull requests this week.

@danielskatz

👋 @jdkent - What's going on with this submission? Are you working on the comments? Or maybe you have worked on them already, and just need to tell us here?


jdkent commented May 21, 2019

Hi @danielskatz, I am working through the comments still. I should have more time to dedicate to the project this week and the next.

@danielskatz

Thanks for the update.


labarba commented Jun 8, 2019

@jdkent — Can you give us an update? If you're not close to done, please let me know of a time period to set a reminder for you.

@kyleniemeyer

Hi @jdkent, sorry for the repeated bugging, but any updates?


jdkent commented Jun 26, 2019

Hi, sorry: I was at a conference and missed responding to the last comment. This is high in my queue; I'm putting in a good-faith effort to finish by the end of this week.

@kyleniemeyer

@whedon remind @jdkent in 1 week


whedon commented Jun 26, 2019

Reminder set for @jdkent in 1 week


whedon commented Jul 3, 2019

👋 @jdkent, please update us on how things are progressing here.


jdkent commented Jul 3, 2019

Update:
I have at least touched on every issue except adding a Binder link, which I am still working out. Thank you for the comments @snastase; I think the documentation is much improved now. @snastase, could you take another look at the JOSS issues to see if I adequately covered your comments?

The most up-to-date documentation should be published on RTD.


snastase commented Jul 9, 2019

@jdkent sorry, I'm trying to get to this! Very backed up right now. Hopefully by this weekend.


arokem commented Jul 17, 2019

Hey @snastase : have you had a chance to take another look?


snastase commented Jul 26, 2019

@jdkent sorry for the delay; I finally got to work back through this. I think this is greatly improved. I went through the documentation and made a PR with minor edits.

I'm satisfied with this but am providing a few additional comments:

  1. You might include an equation in the math documentation for LSS to go along with your Python pseudocode, i.e., one GLM per trial, with the trial of interest as its own regressor and all other trials collapsed into a single nuisance regressor:

     $Y = X_1 \beta_1 + X_{n \neq 1} \beta_{n \neq 1} + \epsilon$
     $Y = X_2 \beta_2 + X_{n \neq 2} \beta_{n \neq 2} + \epsilon$
     $\dots$

     (A minimal Python sketch of this scheme follows this list.)
  2. Important not to conflate "resting-state", which is a task paradigm, with the correlation-based functional connectivity analyses typically run on resting-state data. I tried to make this a bit clearer in the documentation.

  3. I confirmed that the tutorial runs on Binder. For whatever reason, the link to Binder on the Tutorials documentation page points to a kind of off-putting 404 error until the Binder session initializes. (Note also that the References are not rendering properly at the bottom of the Jupyter notebook.)

  4. Do you really want to introduce Generalized Linear Models in the PDF? I'm not sure this (and the discussion of, e.g., non-normality) is really representative of how GLMs are typically used in fMRI. I would also not say the "peak of the gamma curve is determined by the beta coefficient".

  5. I would probably de-emphasize the importance of the "atlas parcellation" in the PDF. Using an atlas to compute inter-areal correlations is only one particular strategy. For example, you could also compute beta series per voxel and then use ICA to identify "networks" rather than using an a priori parcellation.

  6. In general, I would do a final pass through the PDF and make sure it's as concise as possible. For example, I would take out unrelated bits like "voxel is shortened form of volumetric pixel" to make the PDF as short and direct as possible.

  7. I'm curious why you would compute beta series per voxel and then average them within each ROI rather than the much less computationally intensive approach of averaging the time series in each ROI and then computing the beta series, but this is not a critical issue.
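
To accompany item 1, here is a minimal Python sketch of the LSS scheme those equations describe. This is a toy OLS implementation for illustration, not NiBetaSeries's actual code, and the function name and signature are hypothetical:

```python
import numpy as np

def lss_betas(Y, trial_regressors, confounds=None):
    """Least-squares separate (LSS): fit one GLM per trial.

    Y: (time, voxels) data. trial_regressors: (time, trials) matrix with one
    convolved regressor per trial. confounds: optional (time, k) matrix.
    Returns a (trials, voxels) beta series.
    """
    n_trials = trial_regressors.shape[1]
    betas = []
    for i in range(n_trials):
        trial = trial_regressors[:, [i]]                 # X_i: trial of interest
        others = np.delete(trial_regressors, i, axis=1)  # all remaining trials,
        nuisance = others.sum(axis=1, keepdims=True)     # collapsed into one regressor
        parts = [trial, nuisance]
        if confounds is not None:
            parts.append(confounds)
        X = np.hstack(parts)
        # Solve Y = X_i*beta_i + X_{n!=i}*beta_{n!=i} + eps by ordinary least squares.
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        betas.append(beta[0])                            # keep beta_i only
    return np.array(betas)
```

(LSA, by contrast, would estimate all trial betas in a single model with one regressor per trial.)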


arokem commented Sep 18, 2019

@whedon generate pdf


whedon commented Sep 18, 2019

Attempting PDF compilation. Reticulating splines etc...


whedon commented Sep 18, 2019


arokem commented Sep 18, 2019

Oh - of course I don't see these changes in the PDF. They're on a separate branch.


arokem commented Sep 18, 2019

Looks alright to me.


jdkent commented Sep 18, 2019

@whedon generate pdf


whedon commented Sep 18, 2019

Attempting PDF compilation. Reticulating splines etc...


whedon commented Sep 18, 2019


jdkent commented Sep 18, 2019

@Kevin-Mattheus-Moerman

From this commit:

  • I changed a couple of sentences to avoid awkward phrasing and fixed the sentence that was missing "be".
  • I added city/country to affiliations and I made my affiliation more explicit
  • I added and verified links to funding sources.

Let me know if I've covered all your suggestions adequately.

@Kevin-Mattheus-Moerman

@whedon check references


whedon commented Sep 19, 2019

Attempting to check references...


whedon commented Sep 19, 2019


OK DOIs

- 10.3389/fnsys.2015.00126 is OK
- 10.1038/s41592-018-0235-4 is OK
- 10.1016/j.neuroimage.2004.06.035 is OK
- 10.1016/j.neuroimage.2011.08.076 is OK
- 10.1016/j.neuroimage.2015.11.009 is OK
- 10.1016/j.neuroimage.2012.05.057 is OK
- 10.1371/journal.pcbi.1005209 is OK
- 10.3389/fnsys.2010.00008 is OK
- 10.1371/journal.pone.0013701 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@Kevin-Mattheus-Moerman

@jdkent thanks, this looks good now.

@Kevin-Mattheus-Moerman

@whedon accept


whedon commented Sep 19, 2019

Attempting dry run of processing paper acceptance...


whedon commented Sep 19, 2019

Check final proof 👉 openjournals/joss-papers#966

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#966, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@Kevin-Mattheus-Moerman

@whedon accept deposit=true


whedon commented Sep 19, 2019

Doing it live! Attempting automated processing of paper acceptance...


whedon commented Sep 19, 2019

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦


whedon commented Sep 19, 2019

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.01295 joss-papers#967
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01295
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@Kevin-Mattheus-Moerman

@jdkent congratulations on your publication in JOSS. Thank you @arokem for editing this submission and thank you @snastase for reviewing this work. 🎉


whedon commented Sep 19, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01295/status.svg)](https://doi.org/10.21105/joss.01295)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01295">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01295/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01295/status.svg
   :target: https://doi.org/10.21105/joss.01295

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:


jdkent commented Sep 19, 2019

@snastase - thanks again for your review!

@labarba & @Kevin-Mattheus-Moerman - thank you for helping making the text clearer and answering my questions!

@arokem - thank you for your guidance during this process!

This was a great experience!
