
Add feedback to ExecuteTaskSolution action #652

Closed
wants to merge 2 commits

Conversation

DaniGarciaLopez
Contributor

Addresses #475: adding feedback to the ExecuteTaskSolution action.

Instead of executing all subtrajectories at once, we now execute them sequentially and publish feedback after each subtrajectory finishes successfully. This approach also revealed a hidden issue: the allowed_start_tolerance was only checked once at the beginning of the task, not for each subtrajectory. I discovered this after one of our joints started to fail in one stage (this is one of the main reasons for this port: moveit/moveit2#3309).

I'm aware that calling executeAndMonitor for each subtrajectory may have performance implications. However, as far as I could test in our setup, I didn't see much of a difference. What do you think? If it matters, we could add a parameter to enable/disable publishing feedback; in that case, though, we should still address the issue mentioned above.

Since these changes are highly ROS-dependent and I’ve only tested them in ROS2, this PR targets the ros2 branch. Once approved, it should be straightforward to submit another PR for backporting the changes to main for ROS1.
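
Roughly, the idea looks like the following minimal sketch (not the actual diff of this PR; the helper name, the `plans` container, the `PlanExecution` access, and the feedback fields — which follow the message proposed later in this thread — are assumptions):

```cpp
#include <cstdint>
#include <memory>
#include <vector>

#include <moveit/plan_execution/plan_execution.h>
#include <moveit_msgs/msg/move_it_error_codes.hpp>
#include <moveit_task_constructor_msgs/action/execute_task_solution.hpp>
#include <rclcpp_action/rclcpp_action.hpp>

using ExecuteTaskSolution = moveit_task_constructor_msgs::action::ExecuteTaskSolution;
using GoalHandle = rclcpp_action::ServerGoalHandle<ExecuteTaskSolution>;

// Hypothetical helper: execute the subtrajectories one by one and publish
// feedback after each one finished successfully. `plans` is assumed to hold
// one ExecutableMotionPlan per subtrajectory.
bool executeSequentially(std::vector<plan_execution::ExecutableMotionPlan>& plans,
                         plan_execution::PlanExecution& plan_execution,
                         const std::shared_ptr<GoalHandle>& goal_handle)
{
  auto feedback = std::make_shared<ExecuteTaskSolution::Feedback>();
  for (std::size_t i = 0; i < plans.size(); ++i) {
    // execute a single subtrajectory and block until it finished
    const moveit_msgs::msg::MoveItErrorCodes result = plan_execution.executeAndMonitor(plans[i]);
    if (result.val != moveit_msgs::msg::MoveItErrorCodes::SUCCESS)
      return false;  // abort the remaining subtrajectories on failure

    // assumed feedback semantics: sub_id = index of the subtrajectory that just
    // finished, sub_no = total number of subtrajectories
    feedback->sub_id = static_cast<uint32_t>(i);
    feedback->sub_no = static_cast<uint32_t>(plans.size());
    goal_handle->publish_feedback(feedback);
  }
  return true;
}
```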

@rhaschke left a comment
Contributor

This seems to address two issues:

  1. adding feedback during ExecuteTaskSolution action
    Why don't you simply augment the callback function as in Provide feedback during ExecuteTaskSolution action #653?
  2. checking allowed_start_tolerance for every subtrajectory.
    Why do you think this is necessary? The whole set of subtrajectories is executed in one go. Hence it should be sufficient to validate the starting point only.

@DaniGarciaLopez
Contributor Author

  1. adding feedback during ExecuteTaskSolution action
    Why don't you simply augment the callback function as in Provide feedback during ExecuteTaskSolution action #653?

I was thinking that execCallback was a better place to publish the feedback because:

  • You need a goal_handle to publish feedback in ROS2, although we could simply pass the goal_handle to constructMotionPlan.
  • I was planning to further expand the feedback message with additional fields like stage_id, cost, and comment, and to publish the feedback also before the subtrajectory executes. In most cases, I think it's sufficient to receive feedback only when each subtrajectory finishes. However, if you need to monitor which stage is currently executing, it's better to also receive feedback at the beginning. So we could publish feedback before execution starts and again when it finishes, incrementing sub_id to indicate that the subtrajectory has completed. What are your thoughts on this?

The second approach could be feasible using effect_on_success, but I think it’s more understandable to do it within execCallback. However, performance-wise, I don't really like splitting the subtrajectories unless necessary. Maybe your implementation is the best way to go. I'm happy to close this PR in that case and proceed with yours.

2. checking allowed_start_tolerance for every subtrajectory.
Why do you think this is necessary? The whole set of subtrajectories is executed in one go. Hence it should be sufficient to validate the starting point only.

Correct me if I'm wrong, but if the controller does not actuate a joint correctly, the next subtrajectory could start in violation of the allowed_start_tolerance. In our case, we're simply using a plain GripperCommand action server to actuate the joint, rather than wrapping it in a ControllerInterface from ros2_control. Maybe that's why it was failing: we're missing the corresponding check that ControllerInterface performs.
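
For illustration only, a rough sketch of what a per-subtrajectory start-state check could look like (a hypothetical helper, not MoveIt's actual allowed_start_tolerance implementation; it only handles single-DOF joints):

```cpp
#include <cmath>

#include <moveit/robot_state/robot_state.h>
#include <moveit/robot_trajectory/robot_trajectory.h>

// Return true if every active single-DOF joint of the trajectory's group is
// within `allowed_start_tolerance` of the first waypoint of the next subtrajectory.
bool withinStartTolerance(const moveit::core::RobotState& current,
                          const robot_trajectory::RobotTrajectory& next,
                          double allowed_start_tolerance)
{
  const moveit::core::RobotState& start = next.getFirstWayPoint();
  for (const moveit::core::JointModel* jm : next.getGroup()->getActiveJointModels()) {
    if (jm->getVariableCount() != 1)
      continue;  // skip multi-DOF joints in this simplified sketch
    const double delta = std::fabs(current.getJointPositions(jm)[0] - start.getJointPositions(jm)[0]);
    if (delta > allowed_start_tolerance)
      return false;  // this joint drifted too far from the planned start state
  }
  return true;
}
```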

@rhaschke
Contributor

  • I was planning to further expand the feedback message with additional fields like stage_id, cost, and comment

These fields were intended for inspection in rviz only. What is their use during execution?

  • I was planning to publish the feedback also before the subtrajectory executes.

That's a good idea in general and I have implemented this in MoveIt1 for a private project.

If the controller does not actuate a joint correctly, it could start another subtrajectory that violates the allowed_start_tolerance.

If so, the controller should report a failure.

@DaniGarciaLopez
Contributor Author

DaniGarciaLopez commented Feb 10, 2025

These fields were intended for inspection in rviz only. What is their use during execution?

So a different process/node can subscribe to the feedback and track the progress of the execution. We're particularly interested in accessing the cost of the subtrajectory, which, in most cases, represents the time required to complete it.

Another way to achieve this—without passing all those new values into the message—could be to add a service to ExecuteTaskSolutionCapability that retrieves the current Solution being executed. This way, a later process with just the sub_id could retrieve all the necessary parameters.

Is there a more reliable way to determine the time a subtrajectory will take to complete, for example in cases where the stage uses a distance-based cost? If so, I think it might also be useful for users to add something like execution_time and estimated_time_remaining to the feedback message, similar to what Nav2 does.

If so, the controller should report a failure.

Gotcha! No issue here then 😄

@rhaschke
Contributor

Another way to achieve this—without passing all those new values into the message—could be to add a service to ExecuteTaskSolutionCapability that retrieves the current Solution being executed. This way, a later process with just the sub_id could retrieve all the necessary parameters.

A process interested in solution details could simply listen to the /execute_task_solution/goal topic. However, this kind of inspection is not possible in ROS2 anymore, right?

We're particularly interested in accessing the cost of the subtrajectory, which, in most cases, represents the time required to complete it.

The cost is not meant to relate to time! The execution time of each SubTrajectory can be easily read from the time_from_start field of the trajectory waypoints.
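
A minimal sketch of that lookup (assuming the subtrajectory is available as a trajectory_msgs/JointTrajectory):

```cpp
#include <rclcpp/duration.hpp>
#include <trajectory_msgs/msg/joint_trajectory.hpp>

// Duration of a single subtrajectory: the time_from_start of its last waypoint.
rclcpp::Duration subTrajectoryDuration(const trajectory_msgs::msg::JointTrajectory& traj)
{
  if (traj.points.empty())
    return rclcpp::Duration(0, 0);
  return rclcpp::Duration(traj.points.back().time_from_start);
}
```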

@DaniGarciaLopez
Contributor Author

A process interested in solution details could simply listen to the /execute_task_solution/goal topic. However, this kind of inspection is not possible in ROS2 anymore, right?

As far as I know, this is not possible. The action goal is handled via a request/response exchange, which is one-to-one communication between the client and the server (although some progress is being made to enable introspection). The only topics exposed by the action server are /execute_task_solution/_action/feedback and /execute_task_solution/_action/status, both of which are hidden topics.

However, even if introspection were possible, the monitoring process would need to be listening from the beginning before the request is made, right? It wouldn't be possible to have the monitoring process jump in while the execution is running.

Having a service that provides the current Solution would be useful in this case. The monitoring process could start at any time during execution, request the Solution, and begin listening to the feedback while knowing exactly at which stage the execution is currently at.

The cost is not meant to relate to time!

That's right. I thought that cost::TrajectoryDuration was the default cost for most stages, but it turns out it's actually cost::PathLength, my bad.

The execution time of each SubTrajectory can be easily read from the time_from_start field of the trajectory waypoints.

What do you think about using time_from_start to further expand the feedback message:

uint32 sub_id
uint32 sub_no

# total execution time of the whole task
builtin_interfaces/Duration execution_time 

# time remaining to finish the execution
builtin_interfaces/Duration estimated_time_remaining

I think it would be useful for most users.
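
A hedged sketch of how these two fields could be filled from time_from_start (the `sub_trajectories` container and the `current` index are assumed names, and the remaining time ignores progress within the currently executing subtrajectory):

```cpp
#include <cstddef>
#include <vector>

#include <builtin_interfaces/msg/duration.hpp>
#include <rclcpp/duration.hpp>
#include <trajectory_msgs/msg/joint_trajectory.hpp>

// Fill the proposed feedback fields from the planned waypoint timing.
// `sub_trajectories`: the JointTrajectory part of each subtrajectory in the Solution.
// `current`: index of the subtrajectory that is about to execute.
void fillTimingFeedback(const std::vector<trajectory_msgs::msg::JointTrajectory>& sub_trajectories,
                        std::size_t current,
                        builtin_interfaces::msg::Duration& execution_time,
                        builtin_interfaces::msg::Duration& estimated_time_remaining)
{
  rclcpp::Duration total(0, 0);  // planned execution time of the whole task
  rclcpp::Duration done(0, 0);   // planned time of the already finished subtrajectories
  for (std::size_t i = 0; i < sub_trajectories.size(); ++i) {
    const auto& points = sub_trajectories[i].points;
    const rclcpp::Duration d =
        points.empty() ? rclcpp::Duration(0, 0) : rclcpp::Duration(points.back().time_from_start);
    total = total + d;
    if (i < current)
      done = done + d;
  }
  execution_time = total;                   // total execution time of the whole task
  estimated_time_remaining = total - done;  // planned time still left to execute
}
```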

@rhaschke
Contributor

What do you think about using time_from_start to further expand the feedback message?

That's a reasonable addition.

@rhaschke
Contributor

Closed via #653.

@rhaschke closed this Feb 11, 2025