
[REVIEW]: Hardware-Control: Instrument Control and Automation Package #2688

Closed · 49 of 60 tasks
whedon opened this issue Sep 21, 2020 · 128 comments

Labels: accepted, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, TeX
@whedon

whedon commented Sep 21, 2020

Submitting author: @Grant-Giesbrecht (Grant Giesbrecht)
Repository: https://bitbucket.org/berkeleylab/hardware-control/src/main/
Branch with paper.md (empty if default branch):
Version: 2.1.0
Editor: @timtroendle
Reviewers: @aquilesC, @untzag, @garrettj403
Archive: 10.5281/zenodo.6459291

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/e12998080a7839634b5852f633eb0ec6"><img src="https://joss.theoj.org/papers/e12998080a7839634b5852f633eb0ec6/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/e12998080a7839634b5852f633eb0ec6/status.svg)](https://joss.theoj.org/papers/e12998080a7839634b5852f633eb0ec6)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@aquilesC & @untzag & @garrettj403, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @timtroendle know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks at the very latest.

Review checklist for @aquilesC

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license?
  • Contribution and authorship: Has the submitting author (@Grant-Giesbrecht) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @untzag

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license?
  • Contribution and authorship: Has the submitting author (@Grant-Giesbrecht) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @garrettj403

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at the repository URL?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI-approved software license?
  • Contribution and authorship: Has the submitting author (@Grant-Giesbrecht) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support?

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon

whedon commented Sep 21, 2020

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @aquilesC, @untzag, @garrettj403, it looks like you're currently assigned to review this paper 🎉.

⚠️ JOSS reduced service mode ⚠️

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

@whedon

whedon commented Sep 21, 2020

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- None

MISSING DOIs

- None

INVALID DOIs

- None

@whedon

whedon commented Sep 21, 2020

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@aquilesC

aquilesC commented Sep 21, 2020

@timtroendle, I did a quick check of the code and there are a few things to check/improve, which means the package is not yet ready to be published. I also noticed that the latest commit to the master branch was only 11 hours ago, so the code is still evolving, which makes it a moving target for reviewing. I am not sure how I should proceed.

@timtroendle

Thanks @aquilesC for pointing this out. @Grant-Giesbrecht, can you please make sure there is a stable version of the package which can be reviewed? You stated the version for this submission to be "v1.0.0", but this version does not exist in the repo. Can you please add it?

You can of course continue the development, but these changes should remain outside the scope of this submission.

@arunpersaud

Hello, I'm one of the co-authors. We do have a 1.0.0 release, and that is the version that should be reviewed. The released version can be found on PyPI: https://pypi.org/project/hardware-control/. I noticed that we forgot to push the tag to Bitbucket; I just fixed that... thanks for pointing this out.

@aquilesC

Dear @arunpersaud , thanks for clarifying!

So I looked at v1.0.0. First, I would like to acknowledge your hard work on a topic that is normally ill-defined, such as instrumentation with Python. I liked some of your approaches, such as including a terminal within the UI to gain extra control of the devices without leaving it. I also appreciate the effort put into defining reusable user interfaces. I personally don't like the idea of defining your own commands to control instruments, but I understand why you do it. However, this requires a lot of good documentation and justification, which is currently lacking.

As reviewers, we have to follow a checklist to justify our decision (you can see them above), and sadly I do not consider this work to be ready to be published. However, I took the time to write an extensive justification of the things that can be improved, perhaps for a future version 1.1.0. Some of the topics are structural, and therefore I don't think it is wise to open individual issues for each one of my thoughts. (Disclaimer: I have been working on and teaching Python for instrumentation for almost 10 years now, so I may put the bar a bit higher and use this review as a teaching and learning opportunity as well; don't take the remarks personally, but judge them rationally and try to see which things make sense and can be improved.) There are other reviewers and an editor involved, and our opinions may differ.

I do think your program has potential, and it helps boost the image of Python as a valid alternative in the particle physics experimental community.

Paper

  • Typos:
    • Task, such as data logging or parameter scans (should be Tasks)
    • often needs (should be need)
    • Especially the control through python during execution make (should be makes)
    • there are other typos as well, throughout the documentation.
  • "uses Qt, a GUI framework" (Riverbank Computing Limited,2020)
    • Qt is not from Riverbank Computing; PyQt is.
  • LabView (...) they often do provide a wide range of instrument drivers.
    This is not correct in spirit: hardware manufacturers provide drivers written for (or in) LabVIEW.
  • It would have been useful to state what this program offers that is not found in others (whether Python-, MATLAB-, or Java-based, etc.).
  • "because it solves the broadly applicable problem of instrument automation"
    This oversells the package. It does not solve the problem; it only shows one way of working.
  • The authors fail to acknowledge the other instrumentation software available, notably projects like PyMeasure, LantzProject, and pyacq, but also uManager, for instance.
  • There is no discussion on why they implemented their own solution instead of contributing to ScopeFoundry, for example.
  • The package builds on PyQtGraph, matplotlib, and numpy, and yet the paper only cites Qt.
  • The paper misses the fact that they only implement a strict way of working with message-based devices, and no explicit way of working with other types of instruments (NI cards are a paradigmatic example).
  • The need for custom messages should have been made explicit, because this decision goes against the SCPI standard, for example (see the short illustration after this list). Some discussion of scalability and the real reusability of the program would also have been nice (CPU usage, memory consumption, maximum number of simultaneous devices).
  • The program is structured in a way that makes it very hard to work with 1D or 2D sensors (like a camera or a 1D array), since everything revolves around the idea of reading and setting channels.
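
To make the SCPI remark concrete, here is a short, hedged illustration; the custom token is a made-up example in the style of the package's self-defined commands, not necessarily its real syntax:

# Standard SCPI query: portable across vendors and self-describing.
scpi_query = "MEASure:VOLTage:DC?"
# Package-style custom command (hypothetical form): meaningful only if documented per backend.
custom_query = "CH1_V_MAX?"
print(scpi_query, custom_query)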

Program

  • Requirements should specify versions: in the same way that python>3.6 is specified, target versions of each package in the environment should be explicit.
  • The paper claims "and many export formats". This is not explicitly documented: what are the formats? Only by digging through the code does one find, in the GUI, the options JSON, Pickle, NPY, TXT, and HDF. However, HDF is not implemented in the code, and saving data in JSON or TXT must be properly documented. Is data going to be saved as base64 or plain ASCII?
  • Missing tests. The authors claim they can't write tests for the backends unless the devices are connected. Even though I agree with the challenges of writing tests for devices, the fact that they later claim the user interface can run without devices connected defeats their own argument. Part of the problem with not being able to run tests is the lack of abstraction between the different backends and user interfaces. This is something that other packages have already achieved, and it is worth checking out.
  • The authors also state in the docs that they have examples and run them periodically. Some of the examples are broken when run with --dummy, so I wonder what the value of the examples is. Also, the examples are far from covering the entire package. complex_example_a, for instance, specifies a command line argument for the connection_type, but this is not used later on.
  • In the Backend they explicitly use pyvisa-py as the VISA backend, but again this is not documented (line 246 of Backend.py). Also, not allowing the user to choose the VISA backend defeats the spirit of the VISA specification (see the first sketch after this list). I also wonder why sockets are used explicitly when VISA can handle TCPIP devices.
  • The README states that the code is linted with black; however, running black on the code itself flags 22 files with errors (almost 30% of the total).
  • The NI backend has some issues. It imports nidaqmx only on Windows systems; on any other system it raises an exception which is not properly handled. There is a script at the end of the file that is meant to remediate this, but it makes no sense to have it in the same file where nidaqmx is imported.
  • There are many parts of the code that repeat themselves. For example, the Adam_6015 and Adam_6024 have an identical try_connect method.
  • There are patterns that repeat throughout but are not abstracted; for example, the check for a dummy instrument happens all over the place and could be part of the base Backend.
  • Type hints: they are only partially implemented, which makes them pointless, and sometimes they are wrong. Many return types are specified neither in the docstrings nor as type hints. (see this)
  • Some patterns are confusing. For example, in the Rigol backend, the string "Bad setting" is returned if there is a problem, but None is also returned if things go right, and sometimes a string is returned if things go right. I don't see proper definition and handling of exceptions.
  • Separation of concerns: For example, the Instrument class is defined within the GUI, while I believe it would have been more appropriate to have it in the 'backends' and without relying on Qt. This class is, after all, an abstraction on the behavior of the instruments and may be useful even if you don't build a GUI. This also happens with the saving of data. I wouldn't have guessed that the GUI specifies the data formats.
  • There are some parameters that are hardcoded which makes it hard to understand the rationale behind. For example, the default_state in the DelayGenerator.py.
  • Many methods return None or False to indicate success or failure. This pattern is very risky, because "not val" evaluates to True in either case. I see an overall lack of proper exception raising and handling (see the second sketch after this list).
  • In the docs, it is stated that "A backend for an instrument will run in its own thread. It should have no Qt references"; however, the only implementation of threads I found is based on QObjects and QThreads. I strongly suggest checking concurrent.futures for dealing with instruments; it can simplify the current approach a lot (see the third sketch after this list).
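
On the VISA-backend point above, a minimal sketch (the function is hypothetical, not hardware-control's actual API; the pyvisa calls are that library's documented interface) of letting the caller choose the backend instead of hardcoding pyvisa-py:

import pyvisa

def open_instrument(address: str, visa_library: str = ""):
    # "" selects the default installed VISA implementation (e.g. NI-VISA);
    # "@py" explicitly selects the pure-Python pyvisa-py backend.
    rm = pyvisa.ResourceManager(visa_library)
    return rm.open_resource(address)

# VISA handles TCPIP natively, so no raw sockets are required, e.g.:
# inst = open_instrument("TCPIP0::192.168.0.10::INSTR", visa_library="@py")
# print(inst.query("*IDN?"))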
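
On the None/False point, a minimal sketch with hypothetical names (not code from the package) showing the failure mode, and a dedicated exception with explicit type hints instead:

class SettingError(Exception):
    """Raised when an instrument rejects a setting."""

def set_voltage(channel: int, value: float) -> float:
    """Apply a voltage and return the value actually set, or raise SettingError."""
    if not 0.0 <= value <= 30.0:  # hypothetical hardware limit, for illustration only
        raise SettingError(f"CH{channel}: {value} V out of range")
    return value

# The risky pattern: None (success) and False (failure) are indistinguishable here.
val = None
if not val:
    print("failure, or a successful call that returned None?")

# With exceptions, the two outcomes cannot be conflated:
try:
    set_voltage(1, 50.0)
except SettingError as err:
    print(f"could not apply setting: {err}")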
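
And on the threading point, a minimal sketch, assuming one blocking query function per instrument, of how concurrent.futures can run backend I/O off the GUI thread with no Qt references:

from concurrent.futures import ThreadPoolExecutor

def query_instrument(name: str, command: str) -> str:
    # Stand-in for a blocking VISA or socket query to a real device.
    return f"{name} reply to {command}"

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {name: pool.submit(query_instrument, name, "*IDN?")
               for name in ("scope", "psu", "awg")}
    for name, future in futures.items():
        print(name, future.result())  # blocks only until that query finishes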

Documentation

  • There is an overall lack of documentation throughout the program. The fact that the program requires you to define your own commands for the instruments makes documentation even more necessary. For example, looking at the NI device, there is a self-defined command, CHX_V_MAX, that is not documented anywhere.
  • The documentation does not follow a common style: some docstrings are missing argument and return specifications, some follow numpy-style docstrings, etc.
  • To create a new user interface, the instruction is to "start with a template" (the same goes for creating new backends); however, there is little guidance on what a 'template' is in the case of GUIs. This must be greatly improved, since the majority of the program relies on the user interface.
  • Contributing: there should be more information on how and what to contribute. For example, it is not clear whether there are rules for pull requests (should they stem from main or master? is there a development branch?), or whether pull requests should be squashed or you welcome 100 commits per pull request. A roadmap is always a nice addition in the early stages, since it lets contributors see what kind of help you are looking for.

@arunpersaud

Thanks @aquilesC for the detailed review. A lot to go through ;) I agree with most points and will have to look more into some of the others. Also thanks for pointing out some of the other libraries; I wasn't aware of PyMeasure, and it looks very interesting. For a few points I will probably have some follow-up questions and was wondering if you would be available to discuss them (perhaps offline, to not spam this issue too much?).

@aquilesC

@arunpersaud, sure, you can reach out (just check my profile to see how to do so)! However, bear in mind that my time for coaching is somewhat limited at the moment. You can also check this meta-repository, which has tons of interesting people involved with Python + instrumentation, and this catalog of projects.

@untzag

untzag commented Sep 29, 2020

WOW @aquilesC, what an amazingly detailed review, just amazing. I'd like to give a few humble "big picture" comments as someone who is interested in this kind of software and is also working in this space.

First off, I agree that ideally you should find an existing framework and contribute to that rather than inventing your own approach to driver abstraction. We have a bunch of competing approaches right now, and what we need most of all is to grow an actually sustainable community around something. The dream, of course, is to have a community that works together within one project to do a whole bunch of hardware enablement that everyone can benefit from. micro-manager and EPICS are examples of communities that have succeeded in developing this critical mass, although in my opinion each of them fails to meet the needs of small, general-purpose labs like the ones we're interested in here.

I'm generally not a fan of monolithic packages that attempt to implement all of their hardware support and graphics together; this leads to packages that are hard to install and not very portable or extensible, in my opinion. I myself am currently ripping out the hardware support from pycmds and migrating everything to yaq.

At the very least, you should comment on the projects mentioned by @aquilesC and contrast them with hardware-control within the paper.

On the other hand, I actually think that your front-end approach is compelling. The only other package I know of that attempts to solve the frontend problem generically is PyDM. I encourage you to look closer at the existing ecosystem and think carefully about where you can best contribute without reinventing. Perhaps, for example, you could focus on leveraging an existing driver framework and creating a really polished GUI experience for particle physics.

@untzag

untzag commented Sep 29, 2020

Regarding publication, I think I'm less pessimistic than @aquilesC. My understanding is that JOSS does not review based on novelty. I think that hardware-control could be a reasonable JOSS publication if the authors meet the following criteria:

  • a much better job covering prior art and comparing hardware-control to those packages within the paper
  • a description in the paper / documentation of the core abstraction that hardware-control uses to define hardware interfaces (commands, state dictionaries)
  • fully working examples with documentation
  • tests of some kind (even smoke tests with "virtual" hardware would be great! See the sketch after this list.)
  • community guidelines
  • fixed typos
  • clear documentation of each kind of hardware supported (DelayGenerator, FlowController etc) including a list of required commands with descriptions
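
A hedged illustration of the smoke-test idea: DummyBackend here is a hypothetical stand-in, not hardware-control's API, but a test of this shape exercises the command plumbing with no device attached and runs under pytest:

class DummyBackend:
    """Fakes a message-based instrument by returning canned replies."""
    def query(self, command: str) -> str:
        return "Dummy Instruments,Model 0,0000,v1.0" if command == "*IDN?" else ""

def test_idn_smoke():
    backend = DummyBackend()
    assert "Dummy Instruments" in backend.query("*IDN?")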

I personally think that things like type hints and exception handling don't need to be perfect for publication. To me it's clear that, while this package has room to grow for sure, it definitely passes JOSS' core requirements:

  • Be open source (i.e., have an OSI-approved license).
  • Have an obvious research application.
  • Be feature-complete (no half-baked solutions) and be designed for maintainable extension (not one-off modifications).
  • Minor 'utility' packages, including 'thin' API clients, and single-function packages are not acceptable.

Perhaps the editor could provide some guidance here? @timtroendle

@untzag

untzag commented Sep 29, 2020

The paper misses the fact that they only implement a strict way of working with message-based devices, and no explicit way of working with other types of instruments (NI cards are a paradigmatic example).

@aquilesC can you say more here? I'm not sure I understand your point. I understand that daqmx isn't message-based, but I do think I could write a class that parameterizes the functionality of an NI card into hardware-control messages.
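
As a hedged sketch of that idea (the class and command name are hypothetical; the nidaqmx calls follow that library's documented Task API):

import nidaqmx

class NICardMessenger:
    """Wraps an NI card so it answers message-style commands such as 'CH0_READ'."""
    def __init__(self, device: str = "Dev1"):
        self.device = device

    def command(self, message: str) -> float:
        if message.startswith("CH") and message.endswith("_READ"):
            channel = int(message[2:message.index("_")])
            with nidaqmx.Task() as task:
                task.ai_channels.add_ai_voltage_chan(f"{self.device}/ai{channel}")
                return task.read()  # one on-demand analog-input sample
        raise ValueError(f"unknown command: {message}")

# e.g. NICardMessenger().command("CH0_READ") would return a single voltage reading.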

@arunpersaud

@untzag thanks for the review. Both reviews are very helpful, and we will change our project to incorporate them. Perhaps merging the backends into an existing project would be the way to go (which would make our codebase a lot smaller). Also, adding tests seems more doable now that I have seen how other projects handle this. This will take us a while, though (including reviewing all the other projects that both of you pointed out). Should we close this and resubmit later, or keep this open and let you know when we have a newer version ready? Perhaps another question for the editor ;) @timtroendle

@aquilesC

@untzag, perhaps I should have made it clearer. I do agree with you, and I followed the checklist. I checked off 'scholarly effort', which mentions the JOSS guidelines. However, there were other items that I could not tick, and I included this sentence in my original review:

As reviewers, we have to follow a checklist to justify our decision (you can see them above), and sadly I do not consider this work to be ready to be published.

Then, the rest of the suggestions are to be taken as just that: an opportunity to improve the code, and not the minimum effort that needs to be put in to get the publication ready. The authors are free to choose the path they desire.

@aquilesC

The paper misses the fact that they only implement a strict way of working with message-based devices, and no explicit way of working with other types of instruments (NI cards are a paradigmatic example).

@aquilesC can you say more here? I'm not sure I understand your point. I understand that daqmx isn't message-based, but I do think I could write a class that parameterizes the functionality of an NI card into hardware-control messages.

Sorry, my economy of words made the sentence miss the point. In the package, they can use an NI card because there is a Python wrapper for the underlying C library. The rest of the backends work with message-based devices (at least, the ones I remember checking). If I wanted to include another type of device, such as a camera or a digitizer that streams data, there is no clear path within the documentation, and the approach they followed in the other backends would not be easy to replicate. I think it is fair to focus on message-based devices if those are the ones you use the most, but then it would have been nice to mention it, because that makes the use case more explicit.

@timtroendle

Thanks for these amazing reviews, @aquilesC and @untzag. Regarding the two questions raised: first, yes, not all points mentioned are necessary for a publication in JOSS. If there are any specific points for which you are unsure whether they are necessary, @arunpersaud, I'd suggest bringing them up here so we can discuss.

However, you already indicated that you are interested in implementing the requested changes, and that brings me to the second question: if you need time to implement the changes, @arunpersaud, I suggest pausing this review and continuing whenever you are ready, instead of closing and resubmitting. Would this be a problem for any of the reviewers, @aquilesC, @untzag, @garrettj403? @arunpersaud, please give us a rough estimate of how long this will take you.

@untzag

untzag commented Oct 1, 2020

I'm happy to review later if that's what @arunpersaud wants to do.

@aquilesC

aquilesC commented Oct 2, 2020

Fine with me!

@arunpersaud

Let's pause it then, and we'll come back here once we have updated the code. It's hard for me to estimate how long this will take, though. There is lots to review and change, and unfortunately working on the code is not the main part of my job. I would say we need a few months at least.

@arfon

arfon commented Aug 7, 2021

👋 @arunpersaud – just checking in here. Do you think you might be able to complete your changes in the next month?

@arfon

arfon commented Aug 7, 2021

@whedon remind @arfon in one month

@editorialbot

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#3129

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3129, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot added the recommend-accept label Apr 14, 2022
@arunpersaud
Copy link

I also want to thank @untzag, @garrettj403, @aquilesC, @arfon, and @timtroendle for their time and help!

@untzag

untzag commented Apr 14, 2022

Congrats @arunpersaud, really great work.

@arfon

arfon commented Apr 18, 2022

👋 @arunpersaud – I just made a number of minor updates to your paper here: https://bitbucket.org/berkeleylab/hardware-control/pull-requests/1

@arunpersaud

Thanks, just merged them...

@arfon

arfon commented Apr 19, 2022

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

⚠️ Error preparing paper acceptance.

@arfon

arfon commented Apr 19, 2022

@arunpersaud – apologies, but it looks like this line got broken by my commit. Could you change L33 back to:

- name: Technische Universität Darmstadt, 64289 Darmstadt, Hesse, Germany

@arunpersaud

@arfon just pushed a fix from my laptop (but I couldn't run the docker container to check if it worked). Let me know if this didn't fix it, and I will work on it later and test before committing.

@arfon

arfon commented Apr 20, 2022

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.6399528 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.5281/zenodo.5574486 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot

👋 @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof 👉 openjournals/joss-papers#3148

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#3148, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@arfon

arfon commented Apr 20, 2022

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.02688 joss-papers#3149
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.02688
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published labels Apr 20, 2022
@arfon

arfon commented Apr 20, 2022

@aquilesC, @untzag, @garrettj403 – many thanks for your reviews here and to @timtroendle for editing this submission! JOSS relies upon the volunteer effort of people like you and we simply wouldn't be able to do this without you ✨

@arunpersaud @Grant-Giesbrecht – your paper is now accepted and published in JOSS ⚡🚀💥

@arfon closed this as completed Apr 20, 2022
@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02688/status.svg)](https://doi.org/10.21105/joss.02688)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02688">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02688/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02688/status.svg
   :target: https://doi.org/10.21105/joss.02688

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
