Pooltool Submission #173

Open · 20 of 32 tasks
ekiefl opened this issue Apr 15, 2024 · 39 comments

@ekiefl

ekiefl commented Apr 15, 2024

Submitting Author: Evan Kiefl (@ekiefl)
All current maintainers: (@ekiefl)
Package Name: pooltool
One-Line Description of Package: Pooltool is a general purpose billiards simulator crafted specifically for science and engineering.
Repository Link: https://github.com/ekiefl/pooltool
Version submitted: v0.3.3
Editor: @cmarmo
Reviewer 1: @sebastianotronto
Reviewer 2: @eliotwrobson
Archive: TBD
JOSS DOI: TBD
Version accepted: TBD
Date accepted (month/day/year): TBD


Code of Conduct & Commitment to Maintain Package

Description

  • Include a brief paragraph describing what your package does:

Pooltool is a platform to simulate, analyze, and interactively visualize billiards shots. A defining feature is pooltool’s flexible physics architecture, which supports customizable physics models to foster novel applications and encourage community-driven advancements in realism. Pooltool uses parametric table geometries that accommodate custom measurements, and even supports non-traditional table designs built with linear and circular cushion segments. Its robust object-oriented design simplifies data accessibility and analysis, complemented by the ability to losslessly encode/decode raw simulation data into common storage formats. Pooltool is performant due to just-in-time compilation, an event-based evolution algorithm, and efficiently designed data structures. Pooltool is packaged with an interactive 3D interface featuring a comprehensive suite of controls. Through the interface, one can interactively simulate different billiards games or visualize programmatically-generated shots. Continuously evolving through active maintenance and bolstered by a growing community, pooltool represents a significant stride in the realm of billiards simulation for research purposes.

Scope

  • Please indicate which category or categories.
    Check out our package scope page to learn more about our
    scope. (If you are unsure of which category you fit, we suggest you make a pre-submission inquiry):

    • Data retrieval
    • Data extraction
    • Data processing/munging
    • Data deposition
    • Data validation and testing
    • Data visualization¹
    • Workflow automation
    • Citation management and bibliometrics
    • Scientific software wrappers
    • Database interoperability

Domain Specific

  • Geospatial
  • Education

Community Partnerships

If your package is associated with an
existing community please check below:

  • For all submissions, explain how and why the package falls under the categories you indicated above. In your explanation, please address the following points (briefly, 1-2 sentences for each):

Data processing/munging: Pooltool is a physics simulator that both generates simulation data and provides a convenient, object-oriented design that simplifies data accessibility and analysis, which aligns with data processing/munging. Pooltool uses user-facing data structures, compute-efficient algorithms, and lossless serialization/deserialization routines to enhance its performance as a data processing/munging utility for billiards simulation.

Data visualization: Pooltool includes a 3D interactive interface with a comprehensive suite of controls for visualizing and simulating billiards games. This promotes an exploratory approach to billiards simulation and provides complete interactive control for interpreting and understanding data generated by the software.

  • Who is the target audience and what are scientific applications of this package?

Billiards, a term describing games like pool and snooker, supports a robust, multidisciplinary research community investigating topics in physics, game theory, computer vision, robotics, and cue sports analytics.

More than just a collection of physics equations, billiards simulation requires algorithms that coordinate the proper usage of these equations, in order to calculate a comprehensive trajectory of the system through time. Accurate billiards simulation is critical for game theory applications (e.g. developing AI players), billiards-playing robots, computer-vision powered data analytics, shot prediction for assistive learning applications and improved TV broadcasting, and more. In short, pooltool is designed for researchers who require accurate, open, and customizable billiards simulation for their research applications.

  • Are there other Python packages that accomplish the same thing? If so, how does yours differ?

In short, no.

The current landscape reveals a stark contrast between the realistic physics seen in some commercially-produced games (i.e. Shooterspool and VirtualPool4), and the limited functionality of open-source projects. Commercial products have little, if any, utility in research contexts, due to closed source code and no open APIs. Conversely, available open source tools lack realism, usability, and adaptability for generic research needs. The most widely cited simulator in research studies, FastFiz, is unpackaged, unmaintained, provides no modularity for custom geometries or physical models, offers restrictive 2D visualizations, outputs minimal simulation results with no built-in capabilities for introspection, and was custom built for hosting the Association for the Advancement of Artificial Intelligence (AAAI) Computational Pool Tournament from 2005-2008. Another option, Billiards, offers a visually appealing 3D game experience, realistic physics, and supports customization via Lua scripting. However, as a standalone application, it lacks interoperability with systems and tools commonly used in research. Written in Lua, an uncommon language in the scientific community, it has limited appeal in research settings. The lack of Windows support is another drawback. FooBilliard++ is another 3D game with realistic physics, yet it is not a general purpose billiards simulator, focusing instead on game experience and aesthetics. python-billiards is an overly simplistic 2D world. Others suffer from drawbacks already mentioned.

The lack of suitable software for billiards simulation in research contexts, let alone software written in Python, forces researchers to develop case-specific simulators that meet their research requirements, but that fall short of serving the broader community as general purpose simulators. This fragments the research collective, renders cross-study results difficult or impossible to compare, and leads to wasted effort spent reinventing the wheel.

  • If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted:

#171 (@Batalex)

Technical checks

For details about the pyOpenSci packaging requirements, see our packaging guide. Confirm each of the following by checking the box. This package:

  • does not violate the Terms of Service of any service it interacts with.
  • uses an OSI approved license.
  • contains a README with instructions for installing the development version.
  • includes documentation with examples for all functions.
  • contains a tutorial with examples of its essential functions and uses.
  • has a test suite.
  • has continuous integration setup, such as GitHub Actions, CircleCI, and/or others.

Publication Options

JOSS Checks
  • The package has an obvious research application according to JOSS's definition in their submission requirements. Be aware that completing the pyOpenSci review process does not guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
  • The package is not a "minor utility" as defined by JOSS's submission requirements: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
  • The package contains a paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
  • The package is deposited in a long-term repository with the DOI: 10.5281/zenodo.11171124

Note: JOSS accepts our review as theirs. You will NOT need to go through another full review. JOSS will only review your paper.md file. Be sure to link to this pyOpenSci issue when a JOSS issue is opened for your package. Also be sure to tell the JOSS editor that this is a pyOpenSci reviewed package once you reach this step.

Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?

This option will allow reviewers to open smaller issues that can then be linked to PRs rather than submitting a denser, text-based review. It will also allow you to demonstrate addressing issues via PR links.

  • Yes I am OK with reviewers submitting requested changes as issues to my repo. Reviewers will then link to the issues in their submitted review.

Confirm each of the following by checking the box.

  • I have read the author guide.
  • I expect to maintain this package for at least 2 years and can help find a replacement for the maintainer (team) if needed.

Please fill out our survey

P.S. Have feedback/comments about our review process? Leave a comment here

Editor and Review Templates

The editor template can be found here.

The review template can be found here.

Footnotes

  1. Please fill out a pre-submission inquiry before submitting a data visualization package.

@Batalex
Contributor

Batalex commented Apr 16, 2024

Editor in Chief checks

Hi there! Thank you for submitting your package for pyOpenSci
review. Below are the basic checks that your package needs to pass
to begin our review. If some of these are missing, we will ask you
to work on them before the review process begins.

Please check our Python packaging guide for more information on the elements
below.

  • Installation The package can be installed from a community repository such as PyPI (preferred), and/or a community channel on conda (e.g. conda-forge, bioconda).
    • The package imports properly into a standard Python environment (`import package`).
  • Fit The package meets criteria for fit and overlap.
  • Documentation The package has sufficient online documentation to allow us to evaluate package function and scope without installing the package. This includes:
    • User-facing documentation that overviews how to install and start using the package.
    • Short tutorials that help a user understand how to use the package and what it can do for them.
    • API documentation (documentation for your code's functions, classes, methods and attributes): this includes clearly written docstrings with variables defined using a standard docstring format.
  • Core GitHub repository Files
    • README The package has a README.md file with clear explanation of what the package does, instructions on how to install it, and a link to development instructions.
    • Contributing File The package has a CONTRIBUTING.md file that details how to install and contribute to the package.
    • Code of Conduct The package has a CODE_OF_CONDUCT.md file.
    • License The package has an OSI approved license.
      NOTE: We prefer that you have development instructions in your documentation too.
  • Issue Submission Documentation All of the information is filled out in the YAML header of the issue (located at the top of the issue template).
  • Automated tests Package has a testing suite and is tested via a Continuous Integration service.
  • Repository The repository link resolves correctly.
  • Package overlap The package doesn't entirely overlap with the functionality of other packages that have already been submitted to pyOpenSci.
  • Archive (JOSS only, may be post-review): The repository DOI resolves correctly.
  • Version (JOSS only, may be post-review): Does the release version given match the GitHub release (v1.0.0)?

  • Initial onboarding survey was filled out
    We appreciate each maintainer of the package filling out this survey individually. 🙌
    Thank you authors in advance for setting aside five to ten minutes to do this. It truly helps our organization. 🙌


Editor comments

paper.md is not yet merged, but this is not blocking: https://github.com/ekiefl/pooltool/pull/122/files

@ekiefl
Author

ekiefl commented Apr 19, 2024

Hi @Batalex, I've made the following changes:

  1. I've merged paper.md
  2. I made a new release (v0.3.2)
  3. I made a DOI for v0.3.2: https://doi.org/10.5281/zenodo.10995500

I edited the first post with the bumped version (v0.3.2) and the DOI.

Regarding the survey: I already filled this out, but if your records show otherwise, I'm happy to do it again.

With this all finished, I think we're done with pre-review checks. Please let me know if there's anything else left to do.

Thanks for your time.

@ekiefl
Author

ekiefl commented May 10, 2024

In anticipation of pooltool being reviewed by pyOpenSci, I've given a lot of thought to licensing, and I've decided to switch from GPLv3 to Apache v2. I figured it would be better to do this before any reviews start, so I've made the following changes:

  1. Merged the license change (Change of LICENSE from GPL to Apache ekiefl/pooltool#123)
  2. Made a new release (v0.3.3)
  3. I made a DOI for v0.3.3: https://doi.org/10.5281/zenodo.11171124

I edited the first post with the bumped version (v0.3.3) and the DOI.

@Batalex
Contributor

Batalex commented May 20, 2024

Hey @ekiefl,
I am very pleased to announce that @cmarmo will be leading the review as the editor! Chiara will be your first line responder from now on, though you are welcome to ask anyone if you have any questions during the process.
Happy review!

@cmarmo
Member

cmarmo commented May 20, 2024

Hi @ekiefl , nice to meet you, and thanks for submitting to pyOpenSci!
I'm Chiara and I'm going to take care of the process as editor.
I'm looking for reviewers right now and hope to be back to you by the end of the week.

@ekiefl
Author

ekiefl commented May 23, 2024

@Batalex, thanks for your help thus far.

@cmarmo, thanks so much for continuing the effort :)

Please let me know if I can help in any way.

@cmarmo
Member

cmarmo commented May 28, 2024

Hi @ekiefl! I just want to let you know that I had some promising answers from possible reviewers: they asked for some more time to evaluate the commitment.
Fingers crossed, I'll be back in a week with good news... 🤞

@ekiefl
Author

ekiefl commented May 30, 2024

That's fantastic! Fingers crossed

@cmarmo
Member

cmarmo commented Jun 6, 2024

Sorry @ekiefl, bad luck for this first round of seeking reviewers... I'm following some new leads...

@ekiefl
Author

ekiefl commented Jun 6, 2024

Too bad! Ok sounds good. Thanks for all your effort!

@cmarmo
Member

cmarmo commented Jun 24, 2024

Hi @ekiefl, I am happy to share that I've found our first reviewer for Pooltool.
@sebastianotronto kindly accepted to review pooltool for pyOpenSci: thank you very much Sebastiano! 🙏
Feel free to introduce yourself here, while I'm pursuing my quest for a second reviewer... 🙂

@sebastianotronto

Hi everyone!

I am Sebastiano, nice to meet you. I work as a scientific software developer in the private sector, but less than two years ago I defended my PhD in pure Math (number theory). I code mainly in C# (for work) and C (for fun), but I have some experience with Python too.

This is going to be my first review for pyOpenSci.

@ekiefl
Author

ekiefl commented Jun 25, 2024

Hey @sebastianotronto, thank you so much for agreeing to be a reviewer! I also defended my PhD a couple of years ago, not in pool (unfortunately) but in biophysics 😃

@cmarmo
Member

cmarmo commented Jul 8, 2024

Hello everybody, I'm waiting for some answers from potential second reviewers.
In general we try to have both reviewers working at the same time, but in order to move forward, and if @sebastianotronto is ready, perhaps we can start with the first review: @ekiefl would this be ok with you?

Thank you very much for your understanding 🙏

@ekiefl
Author

ekiefl commented Jul 8, 2024

@cmarmo, first of all let me just say thank you for your continued support in trying to find reviewers. I really appreciate your dedication to open scientific software 🙏

@sebastianotronto please feel free to start your review whenever, and thanks again for agreeing to review. Let's hope you're a trendsetter and we get a second reviewer soon enough.

@sebastianotronto

sebastianotronto commented Jul 9, 2024

Sure, here is my review! Apologies if I misunderstood some requirements from the standard template.

Package Review

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README.
  • Installation instructions: for the development version of the package and any non-standard dependencies in README.
  • Vignette(s) demonstrating major functionality that runs successfully locally.
  • Function Documentation: for all user-facing functions.
  • Examples for all user-facing functions.
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING.
  • Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a pyproject.toml file or elsewhere.

Readme file requirements
The package meets the readme requirements below:

  • Package has a README.md file in the root directory.

The README should include, from top to bottom:

  • The package name
  • Badges for:
    • Continuous integration and test coverage,
    • Docs building (if you have a documentation website),
    • A repostatus.org badge,
    • Python versions supported,
    • Current package version (on PyPI / Conda).

NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be wider than it is tall. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)

  • Short description of package goals.
  • Package installation instructions
  • Any additional setup required to use the package (authentication tokens, etc.)
  • Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.
    • Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear)
  • Link to your documentation website.
  • If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
  • Citation information

Usability

Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole.
Package structure should follow general community best-practices. In general please consider whether:

  • Package documentation is clear and easy to find and use.
  • The need for the package is clear
  • All functions have documentation and associated examples for use
  • The package is easy to install

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests:
    • All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install.
    • Tests cover essential functions of the package and a reasonable range of inputs and conditions.
  • Continuous Integration: Has continuous integration setup (We suggest using Github actions but any CI platform is acceptable for review)
  • Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
    A few notable highlights to look at:
    • Package supports modern versions of Python and not End of life versions.
    • Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)

For packages also submitting to JOSS

Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.

The package contains a paper.md matching JOSS's requirements with:

  • A short summary describing the high-level functionality of the software
  • Authors: A list of authors with their affiliations
  • A statement of need clearly stating problems the software is designed to solve and its target audience.
  • References: With DOIs for all those that have one (e.g. papers, datasets, software).

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing: 4.5


Review Comments

Here are my general comments. The parts that require an answer or action are in bold.

Installation

I had minor hiccups during the installation because the tool relies on panda3d version 1.11 (not released yet). The installation instructions are clear, but I tried using an unsupported Python version first (3.12) and that did not work out. It would be nice if the tool could be installed without workarounds for panda3d, and possibly on the latest Python version. Is it just a matter of waiting for panda3d v1.11 to be released?

Graphical mode

The tool also offers a graphical mode that can be "played" like a 3D game. The overall effect is very realistic, and it appears clear that the physics is accurately simulated. I also liked the mouse + keyboard controls; personally, I found them intuitive.

The author does not make specific claims about performance, but the graphical mode was quite demanding on my laptop, with extreme lag. On the other hand, it worked like a charm on my much more powerful desktop.

On my laptop I also encountered some graphical glitches (see pictures below):

  • The help menu (and other text) is cut off when playing at an unusual resolution (I was keeping two windows side by side).
  • The stick disappeared after entering visual mode and only reappears in shot mode or English mode.

[Screenshots: help menu cut off; missing cue stick]

A couple of times I started a shot that took a very long time to calculate, i.e. more than 10 seconds on my desktop. Both times this happened at the very first shot of the game, default mode. I suspect this is because the starting position of many balls all close together generates a huge number of events compared to other shots (the author explained in his blog that the simulation algorithm is event-based). I am curious if the author is aware of this and if there are plans to improve the sometimes slow first shot.

Documentation and API

The documentation is clear and concise and it makes it easy to get started with the package. It is very much still in the making and some parts are marked as incomplete, but the essentials are there and well written.

The documentation contains a "Hello World" example script using the API, along with an in-depth explanation of the example. A very nice way to get started.

The API documentation is automatically generated and contains plenty of examples. As I noted above, not all functions have examples, but considering how fine-grained the API is, I do not think that would be necessary.

I also enjoyed reading the author's blog posts where he explains the physics of the problem and the algorithm used for the simulator. Although not part of the official package documentation, they are definitely a nice addition.

Code

The code is generally high-quality and easy to understand, despite the size of the package.

Possible performance improvement

I have also been thinking about a possible performance improvement in the main simulation loop. I don't know if my suggestion would actually lead to any significant improvement, but it is interesting to think about.

Let's consider the simplified case where there are n balls and the only possible events are ball-ball collisions. Right now the main simulation loop would compute, at every iteration, O(n^2) possible collisions. I am wondering if it is possible to not re-do every collision check at each iteration, saving some work.

Say, for example, that at the first iteration it was found that the next collision is between ball 1 and ball 2. At the second iteration, instead of computing all O(n^2) ball-ball collisions, we could compute just the O(n) collisions involving ball 1 or ball 2. Indeed, any two other balls, say ball 3 and ball 4, are not (yet) affected by the collision of ball 1 and ball 2, so their next collision time remains the same as computed in the first iteration.

This could significantly cut down the number of computations done at each iteration.
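
To make the suggestion concrete, here is a minimal sketch of the idea under simplifying assumptions (ball-ball collisions only; collision_time and resolve_collision are hypothetical stand-ins, not pooltool's API). Cached times live in a priority queue, and stale entries are discarded lazily via per-ball version counters:

import heapq
from itertools import combinations

def simulate(n_balls, collision_time, resolve_collision):
    # collision_time(i, j) -> next i-j collision time, or float("inf").
    # resolve_collision(i, j, t) -> advance the system to that event.
    version = [0] * n_balls

    def push(heap, i, j):
        heapq.heappush(heap, (collision_time(i, j), version[i], version[j], i, j))

    heap = []
    for i, j in combinations(range(n_balls), 2):  # initial O(n^2) pass
        push(heap, i, j)

    while heap:
        t, vi, vj, i, j = heapq.heappop(heap)
        if vi != version[i] or vj != version[j]:
            continue  # stale: a ball's state changed after this was computed
        if t == float("inf"):
            break  # no remaining events
        resolve_collision(i, j, t)
        version[i] += 1
        version[j] += 1
        for k in range(n_balls):  # O(n) recomputation instead of O(n^2)
            if k != i:
                push(heap, i, k)
            if k != j and k != i:
                push(heap, j, k)

After the initial pass, each event costs O(n log n) heap operations rather than O(n^2) collision checks.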

@ekiefl
Author

ekiefl commented Jul 13, 2024

@sebastianotronto, I wanted to respond sooner rather than later to let you know that I've read your review, and I'm super appreciative of how thorough it is.

These are great suggestions and I'm going to start making my way through your comments to try and improve some of the areas you've pointed out.

@cmarmo
Member

cmarmo commented Jul 15, 2024

Dear @ekiefl, I am really happy to announce that we have our second reviewer. 🥳
@eliotwrobson kindly accepted to help us as a reviewer: thank you so much! 🙏

As summer is striking in the Northern hemisphere (which is essentially the one involved here), let's say that it would be wonderful to have the review submitted before August 5th. @eliotwrobson, is that ok with you? Thanks for your help!

@eliotwrobson

eliotwrobson commented Jul 16, 2024

@cmarmo I think August 5 sounds reasonable! I'll let you know if anything comes up. My schedule right now is a little in flux but I should be able to make that.

EDIT: looking at the previous review, it seems like there has been significant discussion of usability and the GUI already. I'm going to focus my review on performance, since that was highlighted as an issue, and try to give more concrete suggestions there.

@eliotwrobson

eliotwrobson commented Jul 17, 2024

Package Review

Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide

  • As the reviewer I confirm that there are no conflicts of interest for me to review this work (If you are unsure whether you are in conflict, please speak to your editor before starting your review).

Documentation

The package includes all the following forms of documentation:

  • A statement of need clearly stating problems the software is designed to solve and its target audience in README.
  • Installation instructions: for the development version of the package and any non-standard dependencies in README.
  • Vignette(s) demonstrating major functionality that runs successfully locally.
  • Function Documentation: for all user-facing functions.
  • Examples for all user-facing functions.
  • Community guidelines including contribution guidelines in the README or CONTRIBUTING.
  • Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a pyproject.toml file or elsewhere.

Readme file requirements
The package meets the readme requirements below:

  • Package has a README.md file in the root directory.

The README should include, from top to bottom:

  • The package name
  • Badges for:
    • Continuous integration and test coverage,
    • Docs building (if you have a documentation website),
    • A repostatus.org badge,
    • Python versions supported,
    • Current package version (on PyPI / Conda).

NOTE: If the README has many more badges, you might want to consider using a table for badges: see this example. Such a table should be wider than it is tall. (Note that a badge for pyOpenSci peer-review will be provided upon acceptance.)

  • Short description of package goals.
  • Package installation instructions
  • Any additional setup required to use the package (authentication tokens, etc.)
  • Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.
    • Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear)
  • Link to your documentation website.
  • If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.
  • Citation information

Usability

Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole.
Package structure should follow general community best-practices. In general please consider whether:

  • Package documentation is clear and easy to find and use.
  • The need for the package is clear
  • All functions have documentation and associated examples for use
  • The package is easy to install

Functionality

  • Installation: Installation succeeds as documented.
  • Functionality: Any functional claims of the software have been confirmed.
  • Performance: Any performance claims of the software have been confirmed.
  • Automated tests:
    • All tests pass on the reviewer's local machine for the package version submitted by the author. Ideally this should be a tagged version making it easy for reviewers to install.
    • Tests cover essential functions of the package and a reasonable range of inputs and conditions.
  • Continuous Integration: Has continuous integration setup (We suggest using Github actions but any CI platform is acceptable for review)
  • Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
    A few notable highlights to look at:
    • Package supports modern versions of Python and not End of life versions.
    • Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)

For packages also submitting to JOSS

Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.

The package contains a paper.md matching JOSS's requirements with:

  • A short summary describing the high-level functionality of the software
  • Authors: A list of authors with their affiliations
  • A statement of need clearly stating problems the software is designed to solve and its target audience.
  • References: With DOIs for all those that have one (e.g. papers, datasets, software).

Final approval (post-review)

  • The author has responded to my review and made changes to my satisfaction. I recommend approving this package.

Estimated hours spent reviewing:


Review Comments

  • It was hard to find the package installation instructions (as there wasn't a quick-start guide in the README), and I had to scroll to the docs section of the README. Also, there is no README on PyPI, which made it hard to tell that the PyPI listing was associated with this package.
  • It would be nice to have fewer steps to install the dev version. I see that poetry is being added in a PR, so that should alleviate this issue.
  • For a package this size, it would be nice to have all of the tests in a separate folder.
  • The pyproject.toml is missing metadata (like author email), but that looks like it will be fixed once the poetry PR is merged.
  • Some badges are missing. In addition to the ones on the checklist, you should have a badge for the Discord there too so people can find it right away.
  • The API documentation seems very complete, but the formatting is a little awkward and hard to read. For example, this indentation looks strange:

[Screenshot: API documentation with awkward indentation]

Since there are many large functions, it might be easier to give each function its own page, the way the numpy docs are set up. More math in the docs would be good.

  • 3.12 support would be good, along with test coverage. It looks like your CI only runs 3.8; you should definitely expand to different versions and control dependency versions somehow. 3.8 is going end-of-life pretty soon.
  • I think there needs to be a couple more vignettes. It would be awesome to have a few that show tasks that could arise in some type of research.
  • I know that type annotations aren't read by numba and that the library has its own type conversions, but it would be nice to have them on the numba-jitted functions to make them easier to read; a sketch of what this might look like follows this list.
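
For illustration, annotations on a jitted function might look like this (a hypothetical helper, not pooltool code; numba ignores the annotations and infers types on its own, so they serve purely as documentation):

import numpy as np
from numba import njit

@njit(cache=True)
def time_to_contact(p1: np.ndarray, p2: np.ndarray,
                    v1: np.ndarray, v2: np.ndarray,
                    radius: float) -> float:
    # Time until two constant-velocity balls of equal radius touch,
    # or np.inf if they never do.
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    dvx = v2[0] - v1[0]
    dvy = v2[1] - v1[1]
    a = dvx * dvx + dvy * dvy
    b = 2.0 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - (2.0 * radius) ** 2
    if a == 0.0:
        return np.inf
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return np.inf
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else np.inf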

@eliotwrobson

@ekiefl I've just started poking around the package, looks cool so far! I was having trouble getting the GUI to work on my machine. Do you happen to have code that produces the slowdown that @/sebastianotronto pointed out? I'd like to do some profiling.

@ekiefl
Author

ekiefl commented Jul 17, 2024

@cmarmo thank you so much for your exhaustive search!

And @eliotwrobson, I really appreciate you agreeing to review. Thank you!


I was having trouble getting the GUI to work on my machine

That's a bit troubling. What are your specs?

do you happen to have code that produces the slowdown that @/sebastianotronto pointed out?

@sebastianotronto I will respond to your review more fully soon, but since @eliotwrobson is bringing up the performance hits that often occur (several seconds or more), I figured it makes sense to provide some further information for both of you.

While it's definitely possible and probably practical to speed up the algorithm via your suggestion of caching event times to avoid recalculation, I believe the large slowdowns you sometimes observe have to do with the package numba.

Numba offers just-in-time compilation of Python code into LLVM machine code, which yields performance similar to what one would expect from a low-level language like C or Rust. Pooltool uses numba extensively in its shot evolution algorithm (probably in 20+ functions). This is great, but since these functions are just-in-time compiled, the first call is always slow because the function is compiled on the spot. Subsequent calls within the same Python process are fast.

Fortunately, numba offers the ability to cache compiled functions, so that a function compiled on a Tuesday can be re-used on a Wednesday. Pooltool makes full use of this feature to ameliorate the problem of re-compiling every time someone boots up a pooltool script. This saves a ton of time; however, there is still significant overhead just to load the cached code.

TLDR: numba can cause the first shot to be slow either because the numba functions need to be compiled (really slow) or because they've previously been compiled but still need to be loaded from cache (just kind of slow). Pile on the fact that the break shot is intrinsically slower because there are more events to calculate, and we can end up with some long wait times.

Partial solution: While numba compilation and cache loading are inevitable, I think the UX could be improved by calculating an "offscreen" shot as soon as the GUI is opened and introducing a "please wait" screen while numba's housekeeping is occurring.
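
As a small, self-contained illustration of the compile-vs-cache-load effect (plain numba, not pooltool):

import time
import numpy as np
from numba import njit

@njit(cache=True)  # persist the compiled function across processes
def total_distance(positions: np.ndarray) -> float:
    d = 0.0
    for i in range(positions.shape[0] - 1):
        dx = positions[i + 1, 0] - positions[i, 0]
        dy = positions[i + 1, 1] - positions[i, 1]
        d += (dx * dx + dy * dy) ** 0.5
    return d

pts = np.random.rand(1000, 2)

t0 = time.perf_counter()
total_distance(pts)  # first call: compile, or load from the on-disk cache
t1 = time.perf_counter()
total_distance(pts)  # later calls: already compiled, fast
t2 = time.perf_counter()

print(f"first call:  {t1 - t0:.3f} s")
print(f"second call: {(t2 - t1) * 1e6:.0f} µs")

On a cold cache, the first call pays for full compilation (seconds); in later processes it only pays the cache-load cost, and every call after the first takes microseconds.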


@eliotwrobson, now that you have more context, if you are still interested in profiling the code, you might try poking around in sandbox/break.py for inspiration. For example, this will calculate 20 breaks and report the average time taken by the shot evolution algorithm:

python sandbox/break.py --no-viz --time-it

If you like profiling with cachegrind files, you can create a cachegrind file of a break shot with

python sandbox/break.py --no-viz --profile-it

In the script you can see lines where I "burn" a break shot to prevent numba compilation/cache loading from being included in the timing/profiling:

        # Burn a run (numba cache loading)
        pt.simulate(shot, continuous=True)

@sebastianotronto

@ekiefl thanks for the explanation! Your proposed solution sounds very reasonable.

@eliotwrobson

@ekiefl Thank you for your fast response! I ran the command you sent and I think you're right. Even on my system, the output of the benchmarks came out to fractions of a second, so most of the time probably is just cache retrieval. The flamegraph I got is below:

[Flamegraph of a cached break-shot run]

I was able to get this during a cached run, and it seems to confirm the discussion on the blog, that most of the time is spent computing all of the events that can happen and finding the minimum. If performance becomes an issue, I think this is a very straightforward option to parallelize using numba, but this might not become relevant unless you want to add better support for larger tables with more balls or something. Overall, I think that having a loading step before opening the GUI is probably the way to go, and anyone running large simulations from a program will just have the first run take slightly longer.
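
For the record, here is the kind of numba parallelization I have in mind, as a sketch only (the (n, 4) state layout and inline quadratic solve are illustrative, not pooltool's actual data structures):

import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def earliest_collision_time(state: np.ndarray, radius: float) -> float:
    # state is (n, 4): x, y, vx, vy per ball. Each outer iteration is
    # independent, so numba distributes rows across threads; a real loop
    # would also track which pair achieves the minimum.
    n = state.shape[0]
    best = np.full(n, np.inf)
    for i in prange(n):
        for j in range(i + 1, n):
            dx = state[j, 0] - state[i, 0]
            dy = state[j, 1] - state[i, 1]
            dvx = state[j, 2] - state[i, 2]
            dvy = state[j, 3] - state[i, 3]
            a = dvx * dvx + dvy * dvy
            b = 2.0 * (dx * dvx + dy * dvy)
            c = dx * dx + dy * dy - (2.0 * radius) ** 2
            if a > 0.0:
                disc = b * b - 4.0 * a * c
                if disc >= 0.0:
                    t = (-b - disc ** 0.5) / (2.0 * a)
                    if t >= 0.0 and t < best[i]:
                        best[i] = t
    return best.min()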

Also, I think I configured something weird on my terminal, so I wouldn't worry too much about it not working for me. Since everything seems alright, I'm going to start going down my checklist 👍🏽

@eliotwrobson

eliotwrobson commented Jul 27, 2024

@ekiefl just as a heads up, I haven't quite finished up my review, but I've been updating the comments section at the bottom, and that has most of the feedback that will be in the final review. Feel free to get started on those items while I'm finishing up the final checks, and that should help reduce the turnaround time for acceptance 🚀

Edit: things got a little crazy on my end, but should be able to finalize the review before the weekend 👍

@ekiefl
Author

ekiefl commented Jul 28, 2024

Thanks for the update @eliotwrobson and for your valuable comments! I'm already working on the revisions :)

@eliotwrobson

@ekiefl As a heads up, I found some time to finish my review! It looks like you were able to address most of the items from my previous round of comments. The only things remaining from my end are the following:

  1. I can see the code coverage badge, but it doesn't show the coverage itself; maybe you just need to push again or something? I don't think this is really a huge deal.
  2. I think that the JOSS submission is solid, but having some information about the usage of the package would be nice. Including some code from a vignette would be a good way to add this without too much extra effort (which ties into the third item), and maybe having some screenshots from the GUI would be cool too.
  3. The other remaining checklist items are all related to vignettes for the package. I was able to get the game interface working reasonably well, and the discussion in the documentation was easy to follow, but I couldn't find vignettes for a simulation task that might arise in research without using the GUI. Having a couple of these would be awesome.

@ekiefl
Author

ekiefl commented Aug 7, 2024

Response to reviewers (take one)

Thanks again for your reviews @sebastianotronto and @eliotwrobson. I'll respond to your reviews in the same post.

And just a warning, @eliotwrobson: I had written the majority of this before you finished your review, so I discuss unchecked boxes that are now checked. Some of this you can skip straight over, but I thought I would keep it for posterity.

Common themes

Reading through everything, there are a few major themes that I'm picking up on that I want to address at a high level: versioning, ease of installation, and lack of obvious research application.

(1) Versioning

Thanks to both of you for pointing out that pooltool doesn't work on Python 3.12. The culprit was the dependency pprofile, which is used in a couple of places in sandbox/. I think it's a great package, but until vpelletier/pprofile#51 is merged, I'm forced to drop it as a dependency. Any profiling code has been removed rather than replaced with something else ¯\_(ツ)_/¯.

Package dependencies are now defined by poetry. I did some manual dependency version testing and settled on the following dependencies:

[tool.poetry.dependencies]
python = ">=3.9,<3.13"
panda3d = [
  {platform = "darwin", version=">=1.10.13,<1.11"},
  {platform = "linux", version = "1.11.0.dev3444", allow-prereleases = true, source = "panda3d-archive"},
  {platform = "win32", version = "1.11.0.dev3444", allow-prereleases = true, source = "panda3d-archive"},
]
panda3d-gltf = ">=1.2.0"
panda3d-simplepbr = ">=0.12.0"
numpy = ">=1.26.0"  # Lower bound for 3.12 (https://github.com/numpy/numpy/releases/tag/v1.26.0)
numba = ">=0.59.0"  # Lower bound for 3.12 (https://numba.readthedocs.io/en/latest/user/installing.html#version-support-information)
scipy = ">=1.12.0"  # Required for numba. Lower bound for 3.12 is officially 1.11, but in practice seems to be 1.12 on MacOS
attrs = ">=21.3.0"
cattrs = ">=22.1.0"
msgpack = ">=1.0.0"  # cattrs structuring fails with msgpack<1
msgpack-numpy = ">=0.4.8"
pyyaml = ">=5.2"
click = ">=8.0.0"
Pillow = ">=6.2.0"
h5py = ">=3.10"

The lower bounds were determined judiciously: I tried to pick the oldest versions compatible with Python 3.12. This maximizes the target space for dependency resolution, so as to maximize the odds that pooltool can share a Python environment with other packages, rather than being relegated to standalone-application status. I could have taken this even further by creating Python-version-conditioned dependencies (e.g. if Python <3.9, use Numpy <1.26), but I thought that was too much to manage for such a small project.

Speaking of which, numpy 1.26.0 is the lower bound for 3.12, but it drops support for 3.8. I've followed suit and dropped support for 3.8, which makes pooltool's Python range >=3.9,<3.13.

To test over this version range, I am now running pooltool's test suite in a matrix of builds, where I test 3.9 and 3.12 for Linux, MacOS, and Windows. With this test matrix now in place, testing for 3.13 and beyond will be easy-peasy.

The development version has correspondingly experienced a massive bump from 3.8.10 to 3.12.4. No more living in the past!

All the changes mentioned can be found in ekiefl/pooltool#124 and ekiefl/pooltool#125.

(2) Ease of installation

Installation got mentioned quite a few times, which obviously isn't a great sign. For me, the main takeaways from reading your comments were that installation (1) doesn't work on 3.12, (2) is perhaps messy/long, and (3) is somewhat hard to find.

As mentioned, pooltool now works on 3.12. There is also now improved signage in the installation instructions and in the README.md clarifying which Python versions pooltool supports.

The pip installation instructions previously had an unconventional additional step (for Linux and Windows), requiring uninstallation of panda3d 1.10.x and reinstallation of 1.11.x (a prerelease). For context, this is required due to this mouse-related feature being absent in Windows and Linux: panda3d/panda3d#928. Thanks to poetry, this messiness is now handled with the following dependency specification:

panda3d = [
  {platform = "darwin", version=">=1.10.13,<1.11"},
  {platform = "linux", version = "1.11.0.dev3444", allow-prereleases = true, source = "panda3d-archive"},
  {platform = "win32", version = "1.11.0.dev3444", allow-prereleases = true, source = "panda3d-archive"},
]

The result is now that what used to be this:

# Windows and Linux
pip install pooltool-billiards
pip uninstall panda3d -y
pip install --pre --extra-index-url https://archive.panda3d.org/ panda3d

# MacOS
pip install pooltool-billiards

is now this:

# Windows and Linux
pip install pooltool-billiards --extra-index-url https://archive.panda3d.org/

# MacOS
pip install pooltool-billiards

Once 1.11 is released, the --extra-index-url can be removed altogether :)

The developer instructions have also been improved and shortened thanks to this and thanks to poetry.

(3) Lack of obvious research application

It seems that pooltool has not been presented in a way that exemplifies its use in research. First of all, it would be very helpful to hear what both of you have to say about this. Since this wasn't expanded upon in either of your comments, I'm currently just inferring it from the unchecked box related to whether pooltool satisfies JOSS's definition of having an obvious research application.

So further elaboration would be very helpful. But in the meantime, I would like to bring to light several things that in my opinion justify not only pooltool's potential to be applied in research, but also its already demonstrated application within research.

If you haven't already read it, I tried my best in the JOSS draft paper to describe billiards-related research and how billiards simulation plays a central role in this multidisciplinary field. (By the way, there exists a GitHub action that renders this paper, so if you want to view it rendered, you can download the artifact from this run). Anyways, here are some relevant excerpts that help define the research landscape of billiards:

Excerpt 1:

Billiards, a broad classification for games like pool and snooker, supports a robust, multidisciplinary research and engineering community that investigates topics in physics, game theory, computer vision, robotics, and cue sports analytics. Central to these pursuits is the need for accurate simulation.

Excerpt 2:

Billiards simulation serves as the foundation for a wide array of research topics that collectively encompass billiards-related studies. Specifically, the application of game theory to develop AI billiards players has led to simulations becoming critical environments for the training of autonomous agents [@Smith2007-jq; @Archibald2010-av; @Fragkiadaki2015-oh; @Archibald2016-sd; @Silva2018-cm; @Chen2019-dk; @Tung2019-zu]. Meanwhile, billiards-playing robot research, which relies on simulations to predict the outcome of potential actions, has progressed significantly in the last 30 years and serves as a benchmark for broader advancements within sports robotics [@Sang1994-jv; @Alian2004-zs; @Greenspan2008-wg; @Nierhoff2012-st; @Mathavan2016-ck; @Bhagat2018-bx]. Billiards simulations also enrich computer vision (CV) capabilities, facilitating precise ball trajectory tracking and enhancing shot reconstruction for player analysis and training (for a review, see @Rodriguez-Lozano2023-hq). Additionally, through augmented reality (AR) and broadcast overlays, simulations have the potential to extend their impact by offering shot prediction and strategy formulation in contexts such as personal training apps and TV broadcasting, creating a more immersive understanding of the game.

Since billiards-related research is niche, I can understand how pooltool's application to research may not be obvious; however, I think the "Statement of Need" section addresses this by explaining the current shortcomings of the research landscape:

Unfortunately, the current billiards simulation software landscape reveals a stark contrast between the realistic physics seen in some commercially-produced games (i.e., Shooterspool and VirtualPool4) and the limited functionality of open-source projects. Commercial products have little, if any, utility in research contexts due to closed source code and a lack of open APIs. Conversely, available open source tools lack realism, usability, and adaptability for generic research needs. The most widely cited simulator in research studies, FastFiz[^1], is unpackaged, unmaintained, provides no modularity for custom geometries nor for physical models, offers restrictive 2D visualizations, outputs minimal simulation results with no built-in capabilities for introspection, and was custom built for hosting the Association for the Advancement of Artificial Intelligence (AAAI) Computational Pool Tournament from 2005-2008 [@Archibald2010-av]. Another option, Billiards[^2], offers a visually appealing 3D game experience, realistic physics, and supports customization via Lua scripting. However, as a standalone application, it lacks interoperability with commonly used systems and tools in research. Written in Lua, an uncommon language in the scientific community, it has limited appeal in research settings. The lack of Windows support is another drawback. FooBilliard++[^3] is a 3D game with realistic physics, yet is not a general purpose billiards simulator, instead focusing on game experience and aesthetics. Other offerings suffer from drawbacks already mentioned.

The lack of suitable software for billiards simulation in research contexts forces researchers to develop case-specific simulators that meet their research requirements but fall short of serving the broader community as general purpose simulators. This fragments the research collective, renders cross-study results difficult or impossible to compare, and leads to wasted effort spent reinventing the wheel. pooltool fills this niche by providing a billiards simulation platform designed for speed, flexibility, and extensibility in mind.

Pooltool was created specifically to fill this gap in research tools.

But perhaps more importantly, I would like to point out the ways in which pooltool has already been applied in research:

  1. Researchers at the Shanghai AI Laboratory have developed pool-playing AI agents using pooltool as the simulation environment. Pooltool has been incorporated as a formal dependency in their popular MCTS-based RL project, LightZero. In the words of one of their main developers:

    The pooltool simulation environment [...] holds great potential as a research benchmark in the field of reinforcement learning. It not only offers scientific value but also encompasses an element of fun. We believe integrating it as a long-term expandable benchmark environment will significantly contribute to the advancement of our research endeavors

    (A simulation environment for billiards: PoolTool opendilab/LightZero#182 (comment))

  2. Researchers at the University of Edinburgh are using pooltool to test the physical reasoning capabilities of large language models (LLMs). The precedent for this research is described by them in SimLM, and a small teaser of their work using pooltool is described here.

  3. A student at Oxford University, Alistair White-Horne, created a "Real-Time Pool and Snooker Assistant" using pooltool as the underlying visualization and simulation module. Their thesis can be found here

One thing that could highlight pooltool as a research tool would be turning some analyses in sandbox/ into proper vignettes hosted on the documentation. For example, the contributor @zhaodong-wang added a quantitative verification of the "30-degree rule", a practical rule for determining ball directions post-collision: ekiefl/pooltool#119. This could be a two-birds-one-stone effort, since @eliotwrobson rightfully noticed a lack of vignettes. How does that sound?

Also, perhaps a section could be added to the README.md called "Projects using pooltool" or something like that, since referencing ways in which pooltool is being used in research environments is both proof of and advertisement for pooltool's applicability to research.

The checklist

With those three large topics out of the way, here is the checklist section of my response.

Since the checklist is the same for both of you, I'm going to respond to each unchecked box and tag one or both of you depending on whether you had checked the box. In the next sections I'll address each of your individual comments.

  • Vignette(s) demonstrating major functionality that runs successfully locally.

@eliotwrobson, I haven't done it yet, but I'm going to add a couple of vignettes to the documentation, since currently there is really just the "Hello World" vignette.

  • Examples for all user-facing functions.

How should we proceed @cmarmo? @sebastianotronto is right that not every user-facing function contains an example. But as they've pointed out, that's because the API is very granular: "As I noted above, not all functions have examples, but considering how fine-grained the API is I do not think it would be necessary".

The vast majority of the codebase is exposed in the API, so if we take the term "user-facing functions" to mean those in the API, this requirement would entail an enormous amount of work. In my opinion, the effort would extend beyond what I assume to be the intent of this checkbox: to ensure users have resources to learn by example. For what it's worth, I think the documentation in its current state satisfies this very important requirement (example-based documentation is something I care a lot about).

  • Metadata including author(s), author e-mail(s), a url, and any other relevant metadata e.g., in a pyproject.toml file or elsewhere.

@sebastianotronto, @eliotwrobson: Added in ekiefl/pooltool#124. Can be seen here: https://github.com/ekiefl/pooltool/blob/3b76a311166dba32bd34771fe2ab82b6e3fbf71e/pyproject.toml#L19-L37

  • Badges for:

    • Continuous integration and test coverage,
    • Docs building (if you have a documentation website),
    • A repostatus.org badge,
    • Python versions supported,
    • Current package version (on PyPI / Conda).

@sebastianotronto, @eliotwrobson: Added in ekiefl/pooltool#127. Can be seen here: https://github.com/ekiefl/pooltool

  • Package installation instructions

@sebastianotronto, @eliotwrobson: I read you guys loud and clear: the installation instructions definitely need more visibility. But I would prefer that they are not duplicated in both the README.md and the documentation (https://pooltool.readthedocs.io/en/latest/getting_started/install.html). This increases maintenance burden by violating the single-source-of-truth principle. To achieve what I hope is the best of both worlds, I've added an explicit # Installation section to the README that links to the installation page in ekiefl/pooltool#127. In particular, this commit: ekiefl/pooltool@79085c3

  • Any additional setup required to use the package (authentication tokens, etc.)

@sebastianotronto, @eliotwrobson: With the adoption of poetry, I hope there is no longer any additional setup that needs documenting (I say that with my fingers crossed).

  • Descriptive links to all vignettes. If the package is small, there may only be a need for one vignette which could be placed in the README.md file.

    • Brief demonstration of package usage (as it makes sense - links to vignettes could also suffice here if package description is clear)

@sebastianotronto, @eliotwrobson: Once vignettes are added to the documentation, I will link them somewhere in the README.md

  • If applicable, how the package compares to other similar packages and/or how it relates to other packages in the scientific ecosystem.

@sebastianotronto, here is a paragraph from the JOSS paper.md draft that situates pooltool with respect to similar software:

"Unfortunately, the current billiards simulation software landscape reveals a stark contrast between the realistic physics seen in some commercially-produced games (i.e., Shooterspool and VirtualPool4) and the limited functionality of open-source projects. Commercial products have little, if any, utility in research contexts due to closed source code and a lack of open APIs. Conversely, available open source tools lack realism, usability, and adaptability for generic research needs. The most widely cited simulator in research studies, FastFiz[^1], is unpackaged, unmaintained, provides no modularity for custom geometries nor for physical models, offers restrictive 2D visualizations, outputs minimal simulation results with no built-in capabilities for introspection, and was custom built for hosting the Association for the Advancement of Artificial Intelligence (AAAI) Computational Pool Tournament from 2005-2008 [@Archibald2010-av]. Another option, Billiards[^2], offers a visually appealing 3D game experience, realistic physics, and supports customization via Lua scripting. However, as a standalone application, it lacks interoperability with commonly used systems and tools in research. Written in Lua, an uncommon language in the scientific community, it has limited appeal in research settings. The lack of Windows support is another drawback. FooBilliard++[^3] is a 3D game with realistic physics, yet is not a general purpose billiards simulator, instead focusing on game experience and aesthetics. Other offerings suffer from drawbacks already mentioned."

Do you think I should try to synthesize some of this info into the README, or do you think it's fine to let sleeping dogs lie?

  • Citation information

@sebastianotronto: Added in ekiefl/pooltool#127 (ekiefl/pooltool@11e9efd).

  • The package is easy to install

@eliotwrobson, I think the instructions (both from source and with pip) are simpler and easier to find now!

  • Packaging guidelines: The package conforms to the pyOpenSci packaging guidelines.
    A few notable highlights to look at:

    • Package supports modern versions of Python and not End of life versions.
    • Code format is standard throughout package and follows PEP 8 guidelines (CI tests for linting pass)

@sebastianotronto, @eliotwrobson: I think things are now much more compliant. PRs ekiefl/pooltool#124, ekiefl/pooltool#125, ekiefl/pooltool#127, ekiefl/pooltool#128, and ekiefl/pooltool#129 have transformed pooltool into a package that, at least from what I can tell, complies with the packaging guidelines set forth by pyOpenSci. There are (at least) two exceptions though: pooltool has no developer guide (described as ideal to have) and no conda package (which I am keen to add, but which is currently blocked by this grayskull issue: conda/grayskull#463).

Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.

The package contains a paper.md matching JOSS's requirements with:

  • A short summary describing the high-level functionality of the software
  • Authors: A list of authors with their affiliations
  • A statement of need clearly stating problems the software is designed to solve and its target audience.
  • References: With DOIs for all those that have one (e.g. papers, datasets, software).

@sebastianotronto, @eliotwrobson: I gave my spiel about this above in "Common themes".

Comments by @sebastianotronto

The tool also offers a graphical mode that can be "played" like a 3D game. The overall effect is very realistic and it appears clear that the physics is accurately simulated. I also liked the mouse + keyboard controls, personally I found them intuitive.

Thanks for your kind words :)

The author does not make specific claims about performance, but the graphical mode was quite demanding on my laptop, with extreme lag. On the other hand, it worked like a charm on my much more powerful desktop.

Yeah, I've noticed some hardware can't quite manage the GUI.

Help menu (but also other text) is cut out when playing with unusual resolution (I was keeping two windows side by side)

The GUI assumes a specific aspect ratio. For example, if users change the window size by clicking and dragging corners, an area-preserving transformation takes place to ensure the ratio. But if there are additional constraints that force a different aspect ratio, we're out of luck.
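For the curious, the constraint boils down to a little algebra. Here is a minimal sketch of the idea (the function name and numbers are illustrative, not pooltool's actual resize handler): given a requested window size, keep the requested area but snap the dimensions back to the target aspect ratio.

```python
import math

def constrain_resize(req_w: float, req_h: float, aspect: float) -> tuple[float, float]:
    """Snap a requested window size back to a fixed aspect ratio,
    preserving the requested area (illustrative sketch only)."""
    area = req_w * req_h
    w = math.sqrt(area * aspect)  # solves w / h == aspect and w * h == area
    h = math.sqrt(area / aspect)
    return w, h

# A 16:9 window dragged to 1000x1000 snaps back to ~1333x750:
print(constrain_resize(1000, 1000, 16 / 9))
```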

Stick disappeared after entering visual mode, only reappears in shot mode or English mode.

I appreciate the feedback on playability. It's nice to know what people find intuitive/unintuitive. In this case, I modelled the behavior after VirtualPool4, so this isn't actually a bug.

I have also been thinking about some possible performance improvement in the main simulation loop. I don't know if my suggestion would actually lead to any significant improvement, but it is interesting to think about it.

I mentioned this in my last post, but this is a really good idea! I will try and implement this and get back to you with the results.

Comments by @eliotwrobson

For a package this size, it would be nice to have all of the tests in a separate folder.

I've never worked with a separate tests folder, but it's about time I made the change. I will implement this.

The API documentation seems very complete, but the formatting is a little awkward and hard to read. For example, this indentation looks strange:

Agreed. I feel like the closing bracket should be indented one level less than it is. IIRC I was having trouble getting it properly formatted. If I can't get it to work, I should just drop the multi-line call signature format and do a single line with line breaks, like pandas: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html.

Since there are many large functions, it might be easier to switch to giving each function its own page, the way that the numpy docs are set up.

I hear you, and I agree that would be nice. But the furo sphinx theme used by pooltool is well known and trusted by big projects like pip, attrs, urllib3, black, and more. So this may be a matter of taste. I would also add that I am open to experimenting with different formats, but my journey with doc rendering has not been a smooth one, so I am resistant to doing it myself.

I think there needs to be a couple more vignettes. It would be awesome to have a few that show tasks that could arise in some type of research.

Good call. Thanks for pointing this out. I feel like these vignettes will really help highlight the research elements of pooltool, which is ultimately what it was designed for.

I know that the type annotations aren't read by numba and that library has its own type conversions, but it would be nice to have them on the numba jit-ed functions to make them easier to read.

This would be helpful for me, too! I will also experiment with specifying the types according to their string-based system: https://numba.pydata.org/numba-doc/latest/reference/types.html
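As a rough sketch of what that could look like (rel_velocity is a made-up example, not a pooltool function; the string signature follows numba's eager-compilation syntax):

```python
import numpy as np
from numba import njit

# The Python annotations are for human readers; numba compiles against
# the explicit string signature instead.
@njit("float64[:](float64[:], float64[:])", cache=True)
def rel_velocity(v1: np.ndarray, v2: np.ndarray) -> np.ndarray:
    return v1 - v2
```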

I can see the code coverage badge but it doesn't show the coverage itself, maybe you just need to push again or something? I don't think this is really a huge deal.

Yeah, this was annoying! I will poke around more.

I think that the JOSS submission is solid, but having some information about the usage of the package would be nice. Including some code from a vignette would be a good way to add this without too much extra effort (which ties into the third item), and maybe having some screenshots from the GUI would be cool too.

Good call. I'll revisit the text after I finish the other bullet points.

The other remaining checklist items are all related to vignettes for the package. I was able to get the game interface working reasonably well, and the discussion in the documentation was easy to follow, but I couldn't find vignettes for a simulation task that might arise in research not using the GUI. Having a couple of these would be awesome.

Yeah, I'm totally on board with that.

My TODOs

I wanted to get my response out before I had necessarily addressed all the issues, just to keep things out in the open. I think getting it out early could prove especially useful for getting feedback on my plan, just to make sure I'm not barking up the wrong tree. So I've made this to-do list, and I welcome suggestions on it.

@cmarmo commented Aug 7, 2024

Thank you @ekiefl for your prompt answer and the todo summary!
I'm letting @sebastianotronto and @eliotwrobson comment about your todo list.

The reviewer is right that not every user-facing function contains an example. But as they've pointed out, that's because the API is very granular: "As I noted above, not all functions have examples, but considering how fine-grained the API is I do not think it would be necessary".

We have experienced similar situations with other substantial packages. As we try to enforce a review schedule, my suggestion here is to add some of the missing examples to the documentation, in order to signify the "work in progress" status, and leave the rest for future improvements... you may even want to organize some sprints and call for contributors in that case...

I have labeled this issue as "awaiting changes": let's touch base in three weeks, on August 28th, to see if reviewers are happy with your changes.

Does this sound reasonable?

@sebastianotronto

Hi @ekiefl, thank you for your response!

After your comment I checked the recent changes and ticked a few more checkboxes. Also, I would like to point out that some of the boxes are (or were) unchecked because the requested information was not present in the README file itself. But I agree with you that having it in the docs and linking to it from the README is a good approach. @cmarmo can the requirements be considered satisfied if the information is only linked to in the README?

Thanks for improving the installation process, I tested it now and the one-liner works!

As for the research applications, I admit that at the time of my first review I did not understand that the project was also submitted to JOSS, hence the missing ticks. The goal of the project is clear to me and I understand the research applications, all good on that side!

For now I am very happy with the changes you made, I'll check back in a couple of weeks for the items in your to-do list :)

@eliotwrobson

Still reading, but all of your proposed solutions for issue 3 sound really good 👍🏽 Mentioning something about these in your JOSS paper would make it stronger as well.

@cmarmo commented Aug 12, 2024

can the requirements be considered satisfied if the information is only linked to in the README?

@sebastianotronto sure! The important thing is that the information is accessible from the README: links are an acceptable option.

@ekiefl commented Aug 31, 2024

Update 1

Hi everyone,

Just wanted to give everyone an update on the slow (but at least somewhat consistent) progress.

Implemented a wait screen when compiling/loading the numba function cache

Due to pooltool compiling numba functions (described in this comment), there was sometimes a large delay when calculating the first shot. This gave the appearance that the shot was taking forever to calculate, when really it was just one-time numba compilation preamble. To separate this from the shot calculation itself, I made the following change: when the user first enters the interactive interface, a dummy shot is calculated in the background, during which time a loading screen appears explaining the wait.

ekiefl/pooltool#136

It looks like this:

[Screenshot: loading screen shown while the numba function cache compiles]
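In case it helps to picture the mechanism, the shape of the change is roughly the following (a sketch with hypothetical names, not the actual PR code): kick off the throwaway shot on a background thread and dismiss the loading screen when it finishes.

```python
import threading

def warm_jit_cache(trigger_compilation, on_done) -> None:
    """Run the expensive first call off the main thread so the GUI can
    show a loading screen instead of appearing frozen (sketch only)."""
    def _work():
        trigger_compilation()  # e.g. simulate a throwaway shot
        on_done()              # e.g. dismiss the loading screen
    threading.Thread(target=_work, daemon=True).start()
```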

Event caching

@sebastianotronto had the idea of having an event cache so the same event isn't re-calculated at each step of the simulation. This was a very good idea and I'm happy to report some significant gains in speed. The details can be found in ekiefl/pooltool#133, but here are speed improvements:

The benchmarks show improved simulation times for random simulations, using ball number as a proxy for simulation complexity:

[Plot: simulation time vs. number of balls, with and without event caching]

The improvement is even more pronounced when the number of balls is increased dramatically:

[Plot: the same comparison at much larger ball counts]

I'm happy that caching always seems worth it, regardless of ball number.
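For readers who haven't followed the PR, the gist of the optimization is sketched below (assumed names and shapes; the real implementation lives in ekiefl/pooltool#133). Next-event times are cached per object pair, and after an event is resolved, only the pairs involving an affected object are recomputed:

```python
from typing import Callable

CacheKey = tuple[int, int]  # ids of the two objects in a potential collision

def next_event(
    pairs: list[CacheKey],
    affected: set[int],
    compute_time: Callable[[CacheKey], float],
    cache: dict[CacheKey, float],
) -> CacheKey:
    """Return the pair with the soonest event, recomputing only stale entries."""
    for pair in pairs:
        if pair not in cache or affected.intersection(pair):
            cache[pair] = compute_time(pair)  # the expensive root-finding step
    return min(cache, key=cache.get)
```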

Next

I've updated my above to-do checklist and will next be working on adding vignettes to the docs. Thank you for everyone's patience.

@sebastianotronto

Great job @ekiefl ! I am glad that caching gives a performance improvement :)

@ekiefl commented Sep 14, 2024

Update 2

We're getting really close.

Established vignette doc infrastructure

I think jupyter notebooks provide a great hands-on approach to learning, so I really wanted the vignettes to be rendered as notebooks. This was made possible by nbsphinx, and the infrastructure for adding these to the docs was done in ekiefl/pooltool#138.

Some details about this:

  1. No .ipynb files are git-tracked. Instead, they are converted to the Python percent format using jupytext (see the sketch below). This makes line diffs clearer, reduces file size, and enables Python linting on notebooks.
  2. The notebooks are executed when the docs are built, and then rendered as HTML.
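To illustrate the percent format for anyone unfamiliar: a notebook is stored as a plain Python script in which special comments delimit cells, and `jupytext --to ipynb example.py` regenerates the notebook. A hypothetical vignette file might look like this (the pooltool calls here are from the README-style quickstart, shown only as an assumption of what a vignette could contain):

```python
# %% [markdown]
# # The 30 Degree Rule
# Markdown cells live in comments, so diffs stay readable.

# %%
import pooltool as pt

system = pt.System.example()       # assumed convenience constructor
pt.simulate(system, inplace=True)  # assumed signature; see the pooltool docs
```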

Added two vignettes

A new section of the documentation now exists, called Examples. It displays a gallery of all the notebook examples/vignettes. Currently there are two.

The first is an analysis of the so-called "30 Degree Rule" and was added in ekiefl/pooltool#138.

[Screenshot: the "30 Degree Rule" example notebook]

The second is an analysis of shot difficulty for straight-in shots and was added in ekiefl/pooltool#140.

[Screenshot: the straight-in shot difficulty example notebook]

Updated the paper

I've updated the paper.md:

  1. There is now a figure displaying screenshots of the interface
  2. I added a section called "Implementation" that gives implementation details about the core features of pooltool
  3. I added a "Usage" section that contains no code itself, but references the pooltool documentation. I did not want to include any pooltool code in the paper because the API is subject to change while the paper content is permanent. Although code usage appears in other JOSS papers, it goes against the JOSS guidelines, which state that "software documentation such as API (Application Programming Interface) functionality should not be in the paper and instead should be outlined in the software documentation"

These changes are found in ekiefl/pooltool#141

Here is a rendered version of the paper: paper.zip

What next?

This morning my CI tests are crashing on the Linux builds due to a buildbot timeout error, so I'll resolve that in due time before merging ekiefl/pooltool#141. (ekiefl/pooltool@87791a6)

Other than that, my TODO list is, for the time being, complete!

@sebastianotronto and @eliotwrobson, could you please double check that your checklists are up to date? If there are any remaining boxes to be ticked, could you please make a suggestion for what you would like to see done?

@eliotwrobson

@ekiefl everything looks great! Checked off all of my boxes ✅

@sebastianotronto

@ekiefl All good for me too! ✅

@cmarmo commented Sep 16, 2024

Congratulations @ekiefl !

🎉 Pooltool has been approved by pyOpenSci! Thank you for submitting Pooltool and many thanks to @sebastianotronto and @eliotwrobson for reviewing this package! 😸

Author Wrap Up Tasks

There are a few things left to do to wrap up this submission:

  • Activate Zenodo watching the repo if you haven't already done so.
  • Tag and create a release to create a Zenodo version and DOI.
  • Add the badge for pyOpenSci peer-review to the README.md of your package. The badge should be [![pyOpenSci Peer-Reviewed](https://pyopensci.org/badges/peer-reviewed.svg)](https://github.com/pyOpenSci/software-review/issues/issue-number).
  • Please fill out the post-review survey. All maintainers and reviewers should fill this out.
It looks like you would like to submit this package to JOSS. Here are the next steps:
  • Editor Task: Once the JOSS issue is opened for the package, we strongly suggest that you subscribe to issue updates. This will allow you to continue to update the issue labels on this review as it goes through the JOSS process.
  • Login to the JOSS website and fill out the JOSS submission form using your Zenodo DOI. When you fill out the form, be sure to mention and link to the approved pyOpenSci review. JOSS will tag your package for expedited review if it is already pyOpenSci approved.
  • Wait for a JOSS editor to approve the presubmission (which includes a scope check).
  • Once the package is approved by JOSS, you will be given instructions by JOSS about updating the citation information in your README file.
  • When the JOSS review is complete, add a comment to your review in the pyOpenSci software-review repo here that it has been approved by JOSS. An editor will then add the JOSS-approved label to this issue.

Editor Final Checks

Please complete the final steps to wrap up this review. Editor, please do the following:

  • Make sure that the maintainers filled out the post-review survey
  • Invite the maintainers to submit a blog post highlighting their package. Feel free to use / adapt language found in this comment to help guide the author.
  • Change the status tag of the issue to 6/pyOS-approved 🚀🚀🚀.
  • Invite the package maintainer(s) and both reviewers to slack if they wish to join.
  • If the author submits to JOSS, please continue to update the labels for JOSS on this issue until the author is accepted (do not remove the 6/pyOS-approved label). Once accepted add the label 9/joss-approved to the issue. Skip this check if the package is not submitted to JOSS.
  • If the package is JOSS-accepted please add the JOSS doi to the YAML at the top of the issue.

If you have any feedback for us about the review process please feel free to share it here. We are always looking to improve our process and documentation in the peer-review-guide.

@cmarmo commented Sep 16, 2024

Thank you to all of you for all your work!
@ekiefl please let me know if you are willing to connect with the pyOpenSci community on our slack, I can send you an invite.
Also, if you are planning to describe your review experience in a blog post let us know so we can cross post on our socials.
