
Support for ROS2 on Fedora IoT with Robotics SIG #31

Open
nullr0ute opened this issue Feb 16, 2024 · 4 comments
Labels: AI (Requirements around AI), enhancement (New feature or request), hardware (Hardware requirements)

Comments

@nullr0ute (Member)

The Fedora Robotics SIG is quite active and is working to package ROS 2 into Fedora, and its members are interested in how they could work with us to support it on Fedora IoT. It's also a target in EPEL for the enterprise use case, since it's used in factory automation and all sorts of industrial robotics, not just fun cars :)

@nullr0ute added the enhancement (New feature or request) label on Feb 16, 2024
@nullr0ute (Member, Author)

Some of the questions we have here are:

  • Do we support it in a container or an ostree layer?
  • How do we support hardware access?
  • How do we deal with media and things like the complex cameras often used in these use cases?
  • How do we provide access to and support for things like AI hardware (GPU, NPU, FPGA, etc.)?

@nullr0ute added the hardware (Hardware requirements) and AI (Requirements around AI) labels on Feb 16, 2024
@miabbott (Member)

@say-paul has shown interest in enabling ROS2 for IoT; he might be able to answer some of these questions.

Additionally, @odra or @lukewarmtemp may be able to help with some of these questions.

@lukewarmtemp commented Feb 20, 2024

I'll take a stab at trying to answer some of these questions, but @say-paul and @odra feel free to pitch in/correct any misunderstandings.

Do we support it in a container or an ostree layer?

With regards to progress right now, we've been able to implement ROS 2 in an ostree layer, as seen in the following repo: https://gitlab.com/fedora/sigs/robotics/ros2-fedora-coreos. This means that rebasing to this native container image on Fedora CoreOS lets you run ROS 2 natively on your system, not within a container. However, it works by installing the rolling distribution of ROS 2 from source, with a few patches applied within the Dockerfile itself, rather than from an RPM. My opinion is to support it in an ostree layer because it provides easier access to GPIO ports and you can leverage the immutable nature of an ostree-based system for security.
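For anyone unfamiliar with the rebase flow, it might look roughly like this (the image reference below is a placeholder, not the tag the repo actually publishes, and the setup.bash path assumes the rolling source build lands in /opt/ros):

```shell
# Sketch: rebase a Fedora CoreOS host onto a container image that layers ROS 2.
# Substitute the real image reference published by the Robotics SIG repo.
sudo rpm-ostree rebase ostree-unverified-registry:example.registry/ros2-fedora-coreos:latest
sudo systemctl reboot

# After the reboot, ROS 2 runs natively on the host, outside any container:
source /opt/ros/rolling/setup.bash   # path is an assumption for a source build
ros2 --help
```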

Note: The rest of my answers are based on the assumption that we'd support it in an ostree layer.

How do we support hardware access?

Since ROS 2 would be natively installed on your system with the previously mentioned repo/container image for Fedora CoreOS, it would be easier to access GPIO ports than when running ROS 2 in a container. This repo is the most recent proof of concept we've worked on: https://github.com/lukewarmtemp/ros2-fcos-gpio. Documentation isn't clearly written out at the moment, so feel free to ask clarifying questions as I work to update it. In a nutshell, it uses the container image mentioned in the previous answer (https://gitlab.com/fedora/sigs/robotics/ros2-fedora-coreos) as the base image, installs the necessary Python packages to interface with an Adafruit FT232H, modifies/creates configuration files, and copies in a ROS 2 workspace containing a simple package that reads and prints ultrasonic sensor values. Here is a quick demo video showing that it works (planning to film/upload a better video in the future): https://photos.app.goo.gl/RUyda86kX8ZKy3dx9. An Adafruit FT232H was used as a stand-in for a Raspberry Pi since it is much cheaper, but I would imagine that interfacing with the GPIO ports on a Raspberry Pi would follow a similar process.
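To illustrate the kind of sensor math such a package performs, here is a small sketch (pulse_to_cm is a hypothetical helper, not from the linked repo; it assumes HC-SR04-style timing, where the echo pulse width covers the round trip at roughly 343 m/s):

```shell
# Hypothetical helper: convert an ultrasonic echo pulse width (microseconds)
# into a distance in centimetres. The pulse covers the round trip, so
# distance_cm = duration_us * 0.0343 / 2.
pulse_to_cm() {
  awk -v us="$1" 'BEGIN { printf "%.1f\n", us * 0.0343 / 2 }'
}

pulse_to_cm 580    # prints 9.9
pulse_to_cm 1160   # prints 19.9
```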

How do we deal with media and things like complex cameras often used in these use cases?

Not too sure about this one, sorry. I'd assume it would follow a similar process to the Adafruit FT232H ultrasonic sensor proof of concept, but I haven't really played around with cameras or more complex devices.

Access and support for things like AI hardware (GPU, NPU, FPGA, etc.)?

Some of our work may be related to this, at least with regards to supporting something like a GPU, although I haven't been able to work on it during the last few months. We've been looking to integrate Autoware on Fedora CoreOS, as seen in this discussion post: https://github.com/orgs/autowarefoundation/discussions/3651#discussioncomment-7212357. Autoware is interesting since it's built on top of ROS 2 and requires certain dependencies such as NVIDIA CUDA and the NVIDIA Container Toolkit. As a proof of concept, I think testing whether Autoware can run on ublue-nvidia (https://universal-blue.org/images/nvidia/) would be a good first step in seeing whether GPU support in the context of ROS 2/Autoware is possible. Once again, I started looking into this but haven't touched it in a while.
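On an image-based host, GPU access from a container usually goes through the NVIDIA Container Toolkit's CDI support; a rough sketch, assuming the toolkit and driver are already installed (the CUDA image tag is illustrative, not prescribed by the Autoware discussion):

```shell
# Generate a CDI spec describing the installed NVIDIA devices.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Expose all GPUs to a container via CDI and verify they are visible.
podman run --rm --device nvidia.com/gpu=all \
    nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If nvidia-smi lists the GPUs inside the container, the same --device flag should work for a ROS 2/Autoware container.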

These are my initial thoughts on each of the questions. Feel free to share more questions/comments/concerns, especially since most of my answers assume we support ROS 2 in an ostree layer, not a container.

@say-paul (Member)

So with centos-bootc I am able to layer ROS 2 and create an installer out of it: [discussion](https://discussion.fedoraproject.org/t/2024-03-28-fedora-robotics-sig-meeting/110059).
Looking forward to the fedora-bootc image to do the same on Fedora.
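That bootc flow might look roughly like this (the base image tag and bootc-image-builder invocation follow the stock centos-bootc examples; the ROS 2 package glob is a placeholder for whatever EPEL/the Robotics SIG actually ship):

```shell
# Containerfile layering ROS 2 on a bootc base (sketch).
cat > Containerfile <<'EOF'
FROM quay.io/centos-bootc/centos-bootc:stream9
# Placeholder package set; substitute the real ROS 2 packages from EPEL.
RUN dnf -y install "ros-*" && dnf clean all
EOF

podman build -t localhost/ros2-bootc:latest .

# Turn the bootable container image into an installer ISO.
mkdir -p output
sudo podman run --rm --privileged \
    -v ./output:/output \
    -v /var/lib/containers/storage:/var/lib/containers/storage \
    quay.io/centos-bootc/bootc-image-builder:latest \
    --type iso localhost/ros2-bootc:latest
```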

@say-paul say-paul self-assigned this May 31, 2024