[REVIEW]: scene_synthesizer: A Python Library for Procedural Scene Generation in Robot Manipulation #7561
Comments
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:

`@editorialbot commands`

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

`@editorialbot generate pdf`
Software report:
Commit count by author:
Paper file info:
📄 Wordcount for `paper.md`
✅ The paper includes a `Statement of need` section
License info:
✅ License found:
👋 @clemense, @AlexanderFabisch, and @Mechazo11 - This is the review thread for the paper. All of our communications will happen here from now on. Please read the "Reviewer instructions & questions" in the first comment above.

Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention #7561 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks, but please make a start well ahead of this, as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.
Review checklist for @AlexanderFabisch
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Comments on the paper:
I am done with my review. I think the tool is a nice contribution and it's easy to use. I opened several issues in the repository, all of which have been quickly addressed by @clemense. I have some comments on the paper, which you can find below my checklist.
@crvernon What's the process regarding "Comments on the paper" - should I just answer them here in this thread?
@clemense - you can either incorporate the changes directly in the paper if you agree with the suggestions, or (if not) discuss here in this thread why you didn't feel the suggested changes were appropriate. Thanks!
Review checklist for @Mechazo11
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Thanks @Mechazo11! I see @AlexanderFabisch is still working through some topics as well.
Thank you @AlexanderFabisch!
@AlexanderFabisch Thank you for your comments! Here are my answers:
That's a great question, and my answer might not be satisfying: academic robotics research is largely driven by PhD theses. Although people in general tend to agree that data is the (most) important ingredient in the deep learning era, it is impossible to get a PhD by focusing solely on data generation. Instead, data generation is a necessary nuisance, and the "intellectual" / scientifically respected part is the neural network architecture, the training process, and the resulting application. Case in point: the preprint of this work was rejected by arXiv (!) for not being "scholarly" enough.
The paper has a "Features & Functionality" section with high-level descriptions. I'd rather not duplicate the more detailed documentation in the paper, since code documentation is better kept in conjunction with the code.
This is a good question, but again very tough to answer. IMO it depends on the downstream task: if the ultimate task is to learn a cat detector vs. a bipedal running policy, the metrics will be very different. I think no single metric can capture all of these use cases. What I see a lot in the current wave of LLM-driven scene generators is using, e.g., the CLIP score of rendered images as a metric - but it's obvious that this already has a number of shortcomings.
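To make that concrete, here is a minimal sketch of the kind of CLIP-based scoring I mean (illustrative only: the model name, image path, and prompt are placeholders, and none of this is part of scene_synthesizer):

```python
# Illustrative sketch: scoring a rendered scene image against a text prompt
# with CLIP, the kind of metric mentioned above. Not part of scene_synthesizer;
# model name, image path, and prompt are placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("rendered_scene.png")  # hypothetical render of a generated scene
inputs = processor(text=["a photo of a kitchen"], images=image, return_tensors="pt")

# logits_per_image is the scaled image-text cosine similarity;
# higher means the render matches the prompt better.
score = model(**inputs).logits_per_image.item()
print(score)
```

A metric like this only measures how well the rendered image matches a prompt; it says nothing about physical plausibility, which is exactly one of its shortcomings.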
The software was used in the following ways:
That's right, the only kitchen-specific things in the library are the procedural scenes and some of the procedural assets. The reason there's more kitchen-specific content in the library is that existing scene generators and scene datasets often avoid kitchens (and focus on other parts of an apartment, bedrooms, etc.) due to the highly constrained nature of kitchen furniture and layouts. Currently, the paper doesn't limit itself to kitchen scenes - and the examples in the docs are also not kitchen-specific.
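For illustration, generating one of those procedural kitchens only takes a few lines. A minimal sketch along the lines of the documented quickstart (treat the exact function names as assumptions and check the docs for the released API):

```python
# Sketch along the lines of the documented quickstart;
# exact names may differ from the released API.
from scene_synthesizer import procedural_scenes as ps

kitchen = ps.kitchen()          # randomized, simulation-ready kitchen scene
kitchen.show()                  # inspect it in a viewer
kitchen.export("kitchen.urdf")  # export for use in a physics simulator
```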
I changed the references and wrote out the full names of these conferences.
Maybe there was a misunderstanding on my side: I thought there were similar software packages that are optimized for visual scenes but not for physical simulations. I assumed this based on this part of the statement of need: "purely generative models still lack the ability to create scenes that can be used in physics simulator [..]. Other procedural pipelines either focus on learning visual model". My question is how that differs: what differentiates a scene generator for physical simulation from a scene generator for visual "simulation"?
That's crazy. It's more of an engineering task, but it is foundational work in today's robotics research. However, to give it a more scientific character, I believe you should focus a bit on how we can evaluate scene generators, so that other people have a way to quantify improvement. That's why I asked about this. Maybe it's too much to ask for in this paper, though.
I'd suggest adding that to the paper.
... and this as well.
Ah, got it. The physical simulation needs things like collision geometry, mass information, center of mass, friction, restitution, and articulation information (joints and their properties, damping, limits, maximum efforts, velocities, etc.).
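To spell that out, the extra per-object metadata a physics engine consumes (and a purely visual pipeline can ignore) looks roughly like this. This is an illustrative structure only, not scene_synthesizer's actual data model:

```python
# Illustrative only: the kind of per-object metadata a physics simulator
# needs beyond visual meshes. NOT scene_synthesizer's actual data model.
from dataclasses import dataclass, field

@dataclass
class JointProperties:
    joint_type: str       # e.g. "revolute" (cabinet door) or "prismatic" (drawer)
    limits: tuple         # (lower, upper) position limits
    damping: float        # joint damping coefficient
    max_effort: float     # maximum actuator force/torque
    max_velocity: float   # maximum joint velocity

@dataclass
class PhysicalAsset:
    visual_mesh: str      # mesh used for rendering
    collision_mesh: str   # simplified geometry used for contact checks
    mass: float           # kg
    center_of_mass: tuple # (x, y, z) in the object frame
    friction: float       # contact friction coefficient
    restitution: float    # "bounciness" on impact
    joints: list = field(default_factory=list)  # articulation, if any
```

A visual-only generator can get away with meshes and materials; everything else in this sketch is what makes a scene usable in a physics simulator.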
I'm a bit torn on this. Again, it's complicated since scene generation has different objectives, depending on the downstream application/task. The software itself doesn't provide any metrics or support to evaluate scene generators. This is still (and IMO will be for a long time) a fuzzy research area. Also, JOSS explicitly states that "Your paper must not focus on new research results accomplished with the software." and that the software "supports the functioning of research instruments or the execution of research experiments". Keeping the distinction between research software and research results is - IMO - best served by not bloating the paper with random musings about potential metrics and rankings.
Done!
Done!
@editorialbot generate pdf
Could you add that to the paper as well? I think then my review is finished!
Ok, I just added this. Thanks!
@editorialbot generate pdf
I also added the example asked for by @Mechazo11. @crvernon - I don't see any open requests from the reviewers. Let me know if I need to do anything else. Thanks!
@crvernon Let me know if I can/need to do anything else to push this over the finish line. Thank you!
Submitting author: @clemense (Clemens Eppner)
Repository: https://github.com/NVlabs/scene_synthesizer
Branch with paper.md (empty if default branch):
Version: 1.11.4
Editor: @crvernon
Reviewers: @AlexanderFabisch, @Mechazo11
Archive: Pending
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@AlexanderFabisch & @Mechazo11, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist: `@editorialbot generate my checklist`
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @crvernon know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @AlexanderFabisch
📝 Checklist for @Mechazo11