
[REVIEW]: jsPsych: Enabling an Open-Source Collaborative Ecosystem of Behavioral Experiments #5351


Closed
editorialbot opened this issue Apr 8, 2023 · 60 comments
Assignees
Labels
accepted JavaScript published Papers published in JOSS recommend-accept Papers recommended for acceptance in JOSS. review Shell Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences TypeScript

Comments

@editorialbot

editorialbot commented Apr 8, 2023

Submitting author: @jodeleeuw (Joshua de Leeuw)
Repository: https://github.com/jspsych/jspsych
Branch with paper.md (empty if default branch): joss-paper
Version: v7.3.2
Editor: @oliviaguest
Reviewers: @pmcharrison, @xinyiguan, @chartgerink
Archive: 10.5281/zenodo.7702307

Status

(status badge image)

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/66afc874860b052d6d42efe39d17ad7c"><img src="https://joss.theoj.org/papers/66afc874860b052d6d42efe39d17ad7c/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/66afc874860b052d6d42efe39d17ad7c/status.svg)](https://joss.theoj.org/papers/66afc874860b052d6d42efe39d17ad7c)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@pmcharrison & @xinyiguan & @chartgerink, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @oliviaguest know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @xinyiguan

📝 Checklist for @chartgerink

📝 Checklist for @pmcharrison

@editorialbot editorialbot added JavaScript review Shell Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences TypeScript labels Apr 8, 2023
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.88  T=5.86 s (119.9 files/s, 50071.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JavaScript                      73           7238          55422         132850
JSON                            68              0              0          32923
TypeScript                     150           4317           2751          25432
Markdown                       218           5864              0          14138
HTML                           180           1130             15           8844
CSS                              3             38             56           1168
SVG                              1              1              1            491
YAML                             5             27             12            296
TeX                              1             30              0            234
Sass                             2             26              0            208
TOML                             1              3              0             14
Bourne Shell                     1              1              0              3
-------------------------------------------------------------------------------
SUM:                           703          18675          58257         216601
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot

Wordcount for paper.md is 876

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.3758/s13428-014-0458-y is OK
- 10.31234/osf.io/fv65z is OK
- 10.3758/s13428-011-0168-7 is OK
- 10.31234/osf.io/avj92 is OK
- 10.20982/tqmp.17.3.p299 is OK
- 10.3758/s13428-021-01767-3 is OK
- 10.3758/s13428-022-01803-w is OK
- 10.3758/s13428-020-01445-w is OK
- 10.1101/192377 is OK
- 10.1017/S1930297500008512 is OK
- 10.3758/s13428-018-1155-z is OK
- 10.3758/s13428-019-01283-5 is OK
- 10.3758/s13428-019-01237-x is OK
- 10.3758/s13428-022-01899-0 is OK
- 10.3758/s13428-022-01948-8 is OK
- 10.1177/0098628316677643 is OK
- 10.1162/opmi_a_00002 is OK
- 10.3389/fpsyg.2016.00610 is OK
- 10.1590/1516-4446-2020-1675 is OK
- 10.21105/joss.02088 is OK
- 10.3758/s13428-020-01535-9 is OK
- 10.3758/s13428-016-0824-z is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@oliviaguest

👋 @jodeleeuw, this is where the review will actually take place. @pmcharrison, @xinyiguan, @chartgerink please use this issue to leave your comments and feedback for the authors (and please read all the instructions above) — however:

  • If you need to comment on the code itself directly (e.g., by making a PR or opening an issue), please link to the PR/issue you create here, so I can get to it from this issue.
  • If you need to give feedback on the paper directly (feel free to make a PR if need be, e.g., for typos), leave the comments here (or link to them), so I can see them from here easily.
  • If you have to comment on broad overarching things, like any feedback generally, just leave the comments here.

Hope this is clear, let me know if not, and thank you for your time! ☺️

@xinyiguan

xinyiguan commented Apr 9, 2023

Review checklist for @xinyiguan

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/jspsych/jspsych?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@jodeleeuw) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@jodeleeuw

Tagging my coauthors here for ease of access: @becky-gilbert @bjoluc

@openjournals openjournals deleted a comment from editorialbot Apr 11, 2023
@chartgerink

chartgerink commented Apr 11, 2023

Review checklist for @chartgerink

(Standard JOSS reviewer checklist — identical in content to the checklist above.)

@oliviaguest

Tagging my coauthors here for ease of access: @becky-gilbert @bjoluc

@jodeleeuw oh, nice idea. Thank you!

@pmcharrison

pmcharrison commented Apr 12, 2023

Review checklist for @pmcharrison

(Standard JOSS reviewer checklist — identical in content to the checklist above.)

@pmcharrison

@jodeleeuw looking good! Great that jsPsych version 7 will be credited in this way!

Regarding 'state of the field', I wonder if you can add a touch more here. This is the main comparison with previous work:

jsPsych and these other options vary in ways such as available features, closed vs. open source, primary programming language, and syntax/style choices, but the main distinction is the particular way that jsPsych abstracts the design of an experiment. jsPsych experiments are constructed using plugins — self-contained modules that define an event and its parameters.

This is a positive statement about what jsPsych does in terms of plugins, but maybe you can spare a few words to say whether/which other programs have analogous constructs to plugins?
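For readers unfamiliar with jsPsych, the plugin abstraction quoted above can be sketched roughly like this. Note this is a simplified, hypothetical illustration of the idea only, not the real jsPsych API (jsPsych v7 experiments actually use `initJsPsych()` and plugin classes such as `jsPsychHtmlKeyboardResponse`; the names below are invented for the sketch):

```javascript
// Hypothetical sketch of a plugin-style abstraction: each plugin is a
// self-contained module that defines one kind of event and its parameters.
const textPlugin = {
  name: "show-text",
  defaults: { stimulus: "", durationMs: 1000 },
  run(trialSpec) {
    // In a real experiment this would render to the DOM and collect input;
    // here we just resolve the trial's parameters against the defaults.
    const { type, ...params } = trialSpec;
    return { plugin: this.name, ...this.defaults, ...params };
  },
};

// An experiment is a timeline: an ordered list of parameterized events,
// each dispatched to the plugin named in its `type` field.
function runTimeline(timeline) {
  return timeline.map((trial) => trial.type.run(trial));
}

const data = runTimeline([
  { type: textPlugin, stimulus: "Hello" },
  { type: textPlugin, stimulus: "Goodbye", durationMs: 500 },
]);
```

The point of the abstraction is that a new trial type only requires writing a new self-contained plugin object; the timeline runner never changes.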

Regarding the references, I just noticed that some of the capitalisation hasn't come through (e.g. r package, pushkin).

@jodeleeuw

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@jodeleeuw

Thank you @pmcharrison! I've fixed the reference formatting issues.

We definitely want to correctly characterize the state of the field.

I think one issue is getting clear about what we think the distinction is between plugins and other ways of abstracting experiment design. psychTestR has probably the closest analog to plugins, so it would be helpful to get your take on what is an accurate description of any differences that do exist. Here's my current thinking:

  • Some software (lab.js, psychopy, opensesame, gorilla, nodegame, penncontroller, psytoolkit) primarily abstracts the design at a different level, providing a set of reusable components that can be deployed in different tasks, like image displays, buttons, keyboard control, etc.
  • Some software (empirica) borrows abstraction from more general-purpose tools like React to create reusable components that could exist at different levels. I could create a component for a button, or a component for a whole task. Components can nest.
  • Some software (psychtestr, lookit) abstracts the design in a very similar way to jsPsych, focusing on parameterized events that are arranged in some sequence. jsPsych is slightly different from these options in two ways (and this could of course change quickly!): (1) jsPsych plugins are fully modular, existing as separate files. This is relevant because it facilitates distributed development, where one plugin can be updated independently of a new version of the core library. (2) There is infrastructure in place, especially with version 7, to facilitate distributed development and sharing. These are not significant differences in terms of underlying design, since I think it wouldn't be difficult to create a new task type for psychTestR and publish it as its own package, and I think the same is true of Lookit. So I don't want to claim that this is a big distinction, but rather just a difference in how things are currently set up to facilitate the kind of collaborative ecosystem that we are emphasizing in this paper.

Does that seem reasonable to you? If so, I'll work on a more concise description for the paper. If not, I'm happy to get feedback!

@pmcharrison

Hi @jodeleeuw, many thanks for this interesting overview! It's very helpful to see it laid out in this way.

From what you say, I think I agree that the best thing is to emphasise the 'community library' aspect of this. The software mentioned in the first category does contain analogous constructions to plugins under the hood, but it's not so easy to contribute your own unless you're a core developer. You're right that psychTestR in theory allows such contributions, but in practice the vast majority of psychTestR community contributions correspond to entire tests (e.g. a particular IQ test, a particular questionnaire) rather than new response interfaces. In contrast, the jsPsych community library provides an unparalleled source of interfaces.

Maybe all that's needed is something like 'While many psychology experiment frameworks contain abstractions analogous to plugins, it is typically hard or impossible for users to contribute their own plugins. In contrast, we have worked hard to develop a system for jsPsych that makes it easy for users to develop their own plugins and share them with other psychologists via open-source repositories. Our community library already contains X plugins, ranging from Y to Z, which ....' What do you think?

@jodeleeuw

@pmcharrison thank you for the very helpful suggestion. We borrowed chunks of it and reworked the statement of need section a bit. Here's the commit so you can see exactly what changed, though I guess it is all one big line in markdown so the actual changes aren't nicely highlighted.

@jodeleeuw

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@pmcharrison

@jodeleeuw I love it, great job!

@oliviaguest I'm happy, checklist complete!

@oliviaguest

@pmcharrison thank you for the review!

@xinyiguan

@jodeleeuw Thank you for this very cool development! I think jsPsych is a valuable contribution to the community, and I particularly appreciate the community-driven and self-contained modular design aspects of the software. The latest version, incorporating feedback from @pmcharrison, looks excellent and has already addressed the points I wanted to raise in my previous review draft.

@oliviaguest I have completed my checklist!

@jodeleeuw

@editorialbot generate pdf

@jodeleeuw

Looks like it didn't generate for some reason after your last message, so I'll try again.

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@jodeleeuw

Everything looks good to me. Thanks @oliviaguest for editing, and @pmcharrison, @xinyiguan, @chartgerink for the time taken to review!

@openjournals openjournals deleted a comment from editorialbot May 9, 2023
@oliviaguest

oliviaguest commented May 9, 2023

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Editor Tasks Prior to Acceptance

  • Read the text of the paper and offer comments/corrections (as either a list or a PR)
  • Check the references in the paper for corrections (e.g. capitalization)
  • Check that the archive title, author list, version tag, and the license are correct
  • Set archive DOI with @editorialbot set <DOI here> as archive
  • Set version with @editorialbot set <version here> as version
  • Double check rendering of paper with @editorialbot generate pdf
  • Specifically check the references with @editorialbot check references and ask author(s) to update as needed
  • Recommend acceptance with @editorialbot recommend-accept

@oliviaguest

@editorialbot check references

@editorialbot

Reference check summary: all 22 DOIs OK; no MISSING or INVALID DOIs (output identical to the check above).

@oliviaguest

@jodeleeuw can you make the title of the zenodo archive the same as the paper? And we're done! 🥳

@jodeleeuw

All set!

@oliviaguest

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary: all 22 DOIs OK; no MISSING or INVALID DOIs (output identical to the check above).

@editorialbot

👋 @openjournals/sbcs-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4218, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label May 11, 2023
@oliviaguest

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Leeuw
  given-names: Joshua R.
  name-particle: de
  orcid: "https://orcid.org/0000-0003-4815-2364"
- family-names: Gilbert
  given-names: Rebecca A.
  orcid: "https://orcid.org/0000-0003-4574-7792"
- family-names: Luchterhandt
  given-names: Björn
  orcid: "https://orcid.org/0000-0002-9225-2787"
contact:
- family-names: Leeuw
  given-names: Joshua R.
  name-particle: de
  orcid: "https://orcid.org/0000-0003-4815-2364"
doi: 10.5281/zenodo.7702307
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Leeuw
    given-names: Joshua R.
    name-particle: de
    orcid: "https://orcid.org/0000-0003-4815-2364"
  - family-names: Gilbert
    given-names: Rebecca A.
    orcid: "https://orcid.org/0000-0003-4574-7792"
  - family-names: Luchterhandt
    given-names: Björn
    orcid: "https://orcid.org/0000-0002-9225-2787"
  date-published: 2023-05-11
  doi: 10.21105/joss.05351
  issn: 2475-9066
  issue: 85
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5351
  title: "jsPsych: Enabling an Open-Source Collaborative Ecosystem of
    Behavioral Experiments"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05351"
  volume: 8
title: "jsPsych: Enabling an Open-Source Collaborative Ecosystem of
  Behavioral Experiments"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05351 joss-papers#4219
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05351
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels May 11, 2023
@xinyiguan

Congrats! @jodeleeuw

@oliviaguest

Sorry for going AWOL, I had a super bad migraine.

Big thank you to the reviewers @pmcharrison, @xinyiguan, @chartgerink! And congratulations @jodeleeuw! 🥳

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05351/status.svg)](https://doi.org/10.21105/joss.05351)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05351">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05351/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05351/status.svg
   :target: https://doi.org/10.21105/joss.05351

This is how it will look in your documentation:

(DOI badge image)

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
