I wonder how we are going to distribute high-resolution bathymetry data.
One thing we could do is create a script that (1) downloads the dataset, (2) runs the Bathymetry_Converter on the dataset, and (3) installs the output to a share directory at build time. Once we have such a script, we would just need to modify the CMakeLists.txt file for the worlds package so that it executes during catkin build. This has worked well for other projects where worlds needed to be auto-generated and where large 3D models needed to be made available to downstream users. Happy to discuss this further if required.
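As a rough sketch, the CMakeLists.txt hook could look something like the snippet below. The script name `generate_bathymetry.sh` and the target name are placeholders, not actual files in the repo:

```cmake
# Hypothetical build-time hook for the worlds package.
# generate_bathymetry.sh stands in for the download + convert + install
# script described above.
add_custom_target(generate_bathymetry ALL
  COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/scripts/generate_bathymetry.sh
          --output ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}
  COMMENT "Downloading and converting bathymetry data"
)
```

`add_custom_target(... ALL ...)` makes the script run on every `catkin build`, which is why caching the downloads (as discussed below) would matter.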
I thought it would be more appropriate to discuss it here on the issue page.
First of all, this is not high priority, but I wanted to discuss and learn about it :)
Wouldn't your solution require users to download and convert the dataset even if they are not using it? Or would it only download/convert when a launch file that needs it is run?
Thanks for creating this ticket to discuss the matter. 😄
You're right: with this approach, the download/conversion would happen for all the worlds. We can decouple the download and conversion into separate scripts. The downloaded datasets can be stored in ~/.gazebo or somewhere else appropriate. The downloader script would then check whether the dataset already exists in the cache and only download the files if it does not. We would then only store a YAML file (for example) in the dave_worlds pkg for each world that we want to generate. This YAML file would follow a schema specifying which dataset should be used, parameters for the conversion, etc. The converter script would parse this schema and invoke the Bathymetry_Converter script accordingly. In the future we could also have a GUI to perform operations like cropping, or even to mark spawn locations for robots. This GUI would read from and write to the same YAML file.
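For concreteness, a per-world YAML file might look something like the sketch below. All field names, values, and the URL are illustrative, not a settled schema:

```yaml
# dave_worlds/worlds/monterey_bay.yaml (hypothetical example)
dataset:
  name: monterey_bay_13m        # key used to locate/cache the downloaded dataset
  url: https://example.com/datasets/monterey_bay_13m.tif  # placeholder URL
conversion:                     # parameters forwarded to Bathymetry_Converter
  region: [-122.0, 36.6, -121.8, 36.8]  # lon/lat bounding box to crop
  resolution: 13                # grid spacing in meters
spawn_points:                   # a future GUI could read/write these
  - name: rexrov
    pose: [0, 0, -95, 0, 0, 0]
```

The downloader would key its cache lookup on `dataset.name`, while the converter consumes the `conversion` section.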
We've adopted this workflow for a different project where we want users to be able to generate custom 3D worlds from 2D drawings which they can annotate with features/models using a GUI. Here's the reference for that: https://github.com/open-rmf/rmf_traffic_editor
Another alternative is for us to upload all the SDF models + meshes currently in dave_object_models to the Dave Fuel collection, which can be automatically downloaded during build or launch.
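For example, models hosted on Fuel can be referenced from an SDF world by URI and are fetched automatically when the world is loaded. The collection owner and model name below are placeholders:

```xml
<!-- Hypothetical Fuel reference; owner/model names are illustrative -->
<include>
  <uri>https://fuel.gazebosim.org/1.0/dave/models/sonobuoy</uri>
  <name>sonobuoy_1</name>
  <pose>0 0 -90 0 0 0</pose>
</include>
```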
Just throwing some ideas out there. Let me know what you think!
*Originally posted by @Yadunund in #202 (comment)*