Demo video: AttackoftheFrankaMultiAttack.mp4
Our project configures a robot to autonomously wield a lightsaber to help the Republic fight the Separatists, with the workspace set up by a human. The robot holds the lightsaber in its gripper to knock over members of the Separatist Army, represented by red blocks placed upright on a table near the Franka robot. The robot (Franakin Skywalker) must also avoid the blocks representing members of the Republic, marked by upright blue blocks. The system uses computer vision to differentiate the blocks.
- The robot picks up its lightsaber, recognizes which block it must knock over, and knocks over the “enemy” blocks without colliding with its “ally” blocks on the attack swing
- The robot is able to avoid multiple allies and knock over multiple enemies
- The robot calculates whether knocking an enemy over will cause the falling block to hit an ally, and if so it does not swing in that style (i.e., it prioritizes leaving allies standing over knocking down enemies)
- The robot checks all of its attack styles before giving up on an attack and moving to the next enemy
Upon launching the `attack_of_the_franka` launch file, the Intel RealSense D435i camera and Franka robot nodes will start running. An RViz simulation of the Franka with the walls and ceiling will appear, as well as an RViz visualization of the transforms generated by the computer vision processing. Calling the `/pickup_lightsaber` service will command the robot to draw the lightsaber from its sheath. Then, after calling the `/look_for_enemy` service, the robot will start planning attacks on all of the enemy red blocks. The decision tree of the workflow evaluates three attack styles: a left swing, a right swing, and a stab. It runs through a sequence to determine which attack to perform in order to ensure it doesn't knock over any blue blocks. If it is unable to attack an enemy, it will move on to the next one, until it is done with all reachable enemies.
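Concretely, a typical run (using the commands from the Setup section below) launches everything and then triggers the two services in order:

```bash
# Launch the camera, robot, and visualization nodes
ros2 launch attack_of_the_franka attack_of_the_franka.launch.py

# In another terminal: draw the lightsaber, then begin attacking
ros2 service call /pickup_lightsaber std_srvs/srv/Empty
ros2 service call /look_for_enemy std_srvs/srv/Empty
```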
The enemy/ally detection is performed using an Intel RealSense D435i camera and OpenCV. The `camera_processor` node converts the RGB image from the camera to HSV and draws contours around objects of specific colors in the scene. The centroids of these objects are then calculated, giving their positions in the RGB image. The RealSense's aligned depth image is used to deproject the pixels in the RGB image into real-world coordinates. For reliability, several filters are used (depth, contour area, location, and morphological opening/closing) to ensure only the relevant objects are detected. AprilTags are also used to determine the robot's position in the scene and the bounds of the work area.
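A minimal sketch of this pipeline in OpenCV is shown below. It assumes the aligned depth image has already been converted to meters and that the camera intrinsics (fx, fy, cx, cy) come from the camera's calibration info; the function name, HSV bounds, and area threshold are illustrative, and the actual `camera_processor` node applies additional location filtering.

```python
import cv2
import numpy as np

def detect_blocks(bgr, depth_m, fx, fy, cx, cy, lower_hsv, upper_hsv, min_area=200.0):
    """Return camera-frame (X, Y, Z) centroids of blocks in an HSV color range."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)

    # Morphological opening/closing to remove speckle and fill small holes
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:  # contour area filter
            continue
        m = cv2.moments(contour)
        if m['m00'] == 0:
            continue
        u = int(m['m10'] / m['m00'])  # pixel centroid of the contour
        v = int(m['m01'] / m['m00'])
        d = float(depth_m[v, u])
        if d <= 0.0:  # depth filter: reject missing depth readings
            continue
        # Pinhole-model deprojection of the pixel into real-world coordinates
        points.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return points
```

Red and blue blocks are then distinguished simply by calling this with two different HSV ranges.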
The motion is controlled by the `robot_control` node. The lightsaber draw motion is initiated by calling the `/pickup_lightsaber` service. The lightsaber is added as a collision object in the scene, and the robot moves toward the lightsaber through a specific end-effector waypoint pose. The lightsaber is then removed from the scene as a collision object, the robot grasps it, and the lightsaber is re-added as an attached collision object. The Franka arm then returns to home.
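The sequence can be summarized in pseudocode. The interface and method names below (`add_collision_object`, `plan_and_execute_to_pose`, and so on) are hypothetical stand-ins for the project's moveit_interface API, not its real signatures, and the `panda_hand` link name is likewise an assumption:

```python
def pickup_lightsaber(moveit, gripper, approach_pose, home_pose, saber_pose):
    # 1. Add the lightsaber to the planning scene so the approach avoids it
    moveit.add_collision_object('lightsaber', saber_pose)

    # 2. Move the end-effector through a waypoint pose at the saber hilt
    moveit.plan_and_execute_to_pose(approach_pose)

    # 3. Remove the separate collision object so the grasp itself is not
    #    rejected as a collision, then close the gripper on the hilt
    moveit.remove_collision_object('lightsaber')
    gripper.grasp()

    # 4. Re-add the saber as an attached collision object so later plans
    #    treat it as part of the arm
    moveit.add_attached_collision_object('lightsaber', link='panda_hand')

    # 5. Return the arm to the home configuration
    moveit.plan_and_execute_to_pose(home_pose)
```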
By calling the `/look_for_enemy` service, all the allies in the scene are added as collision objects. The robot then looks at the leftmost enemy, detected and labeled by the `camera_processor` node, and decides which of three path-planning options to take. Two of the options are the left and right swings, both commanded by end-effector waypoint poses. These poses are determined from the locations detected through the transforms from the camera node. The third option is a stabbing motion, commanded through a combination of the enemy's position and set joint position ranges. The decision sequence first attempts to plan a left swing, then a right swing, and then a stabbing motion. The robot makes this decision based on where the allies are in the scene and whether it can knock over the enemy block without knocking over any ally blocks. If the robot decides it cannot knock over an enemy block without knocking over any allies, it moves on to the next enemy. The robot continues this decision sequence until all reachable enemies have been knocked down.
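The overall decision sequence reduces to a small loop. The helpers below (`plan_left_swing`, `endangers_ally`, `execute`, and so on) are hypothetical stand-ins for the actual planning and safety-check code in `robot_control`:

```python
def attack_all_enemies(enemies, allies):
    # Enemies are assumed ordered so the leftmost is attacked first
    for enemy in enemies:
        # Try the attack styles in the fixed order described above
        for plan_style in (plan_left_swing, plan_right_swing, plan_stab):
            trajectory = plan_style(enemy)
            # A style is usable only if planning succeeded and neither the
            # swing nor the falling block would touch an ally
            if trajectory is not None and not endangers_ally(trajectory, enemy, allies):
                execute(trajectory)
                break  # enemy knocked down; move to the next one
        # If no style was safe, the enemy is left standing and skipped
```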
Environmental factors such as lighting can have a heavy effect on computer vision systems, so such systems must be designed robustly enough to handle the different conditions they might encounter.
Differences between real hardware and simulation are crucial to understand, especially when it comes to motion. Real-life systems do not behave perfectly, and tolerances that can be achieved in simulation may be unattainable in reality. The team encountered this throughout the project, especially when commanding fast arm motions, which produced larger position errors than appeared in simulation.
The most impactful future improvement to this project would be to increase the robustness of the motion planning algorithms. With arbitrary enemy and ally positions, there are many scenarios and edge cases, not all of which the team was able to test. Planning for more of these edge cases and better defining specific motions to handle them would improve the performance of the system.
To get all necessary repositories, clone this repository into the `src` directory in your workspace root. Then, from the workspace root, run:

```bash
vcs import . < src/attack-of-the-franka/project.repos
```

(This assumes you haven't changed the name of the repository; update the path accordingly if you have.)
If you don't have `vcstool` installed, install it with:

```bash
sudo apt install python3-vcstool
```
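For reference, the file that `vcs import` consumes is a YAML map of repositories. The entry below only illustrates the vcstool format; it is not the actual contents of `project.repos`.

```yaml
repositories:
  some_dependency:   # illustrative name, not a real entry from project.repos
    type: git
    url: https://github.com/example/some_dependency.git
    version: main
```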
To launch the full system:

```bash
ros2 launch attack_of_the_franka attack_of_the_franka.launch.py
```

To launch only the camera processing or only the robot control:

```bash
ros2 launch attack_of_the_franka realsense.launch.py
ros2 launch attack_of_the_franka robot.launch.py
```
Once launched, the robot is controlled through service calls:

```bash
ros2 service call /pickup_lightsaber std_srvs/srv/Empty
ros2 service call /look_for_enemy std_srvs/srv/Empty
ros2 service call /move_to_home std_srvs/srv/Empty
```
`attack_of_the_franka`
Type: ament_python
`camera_processor`
- Performs image processing for ally and enemy detection based on color
- Gets the workspace area transforms and the robot transform
`common.py`
- A common library of functions and values used by all nodes
`robot_control`
- Runs the state machine related to motion and interacts with the moveit_interface API to plan and execute attacks on enemies. It processes camera information to locate allies and enemies and provides services for a variety of lightsaber and block interaction tasks.
`realsense.launch.py`
- Launches the nodes needed to read in information from the RealSense camera and AprilTags to recognize the table and block locations relative to the robot base

`robot.launch.py`
- Launches the `robot_control` node and other programs to view the RViz simulation of the Franka robot

`attack_of_the_franka.launch.py`
- Combines the above two launch files to set up all the nodes needed to attack enemies
The `robot_control` node offers the following services:

- `move_to_home` (std_srvs.srv.Empty): move the robot to a predetermined home position
- `pickup_lightsaber` (std_srvs.srv.Empty): command the robot to follow waypoints to pick up the fixed lightsaber with the end-effector
- `look_for_enemy` (std_srvs.srv.Empty): check for any enemies detected in the planning scene and begin to calculate how to attack if possible
- `gripper_open` (std_srvs.srv.Empty): move the robot gripper to the open position
- `gripper_close` (std_srvs.srv.Empty): move the robot gripper to the closed position
- `gripper_grasp` (std_srvs.srv.Empty): move the gripper fingers together until they grasp an object between them
- `waypoints` (std_srvs.srv.Empty): command the robot to move to a predetermined waypoint
- `move_to_pose` (moveit_testing_interfaces.srv.MoveToPose): move the robot to a specific position and orientation
- `joint_waypoint` (std_srvs.srv.Empty): command specific joint positions
- `move_to_position` (moveit_testing_interfaces.srv.MoveToPosition): move the robot to a specific position
- `move_to_orientation` (moveit_testing_interfaces.srv.MoveToOrientation): move the robot to a specific orientation
- `update_obstacles` (moveit_testing_interfaces.srv.UpdateObstacles): add obstacles to the scene
- `update_persistent_obstacles` (moveit_testing_interfaces.srv.UpdateObstacles): add obstacles to the scene permanently
- `update_attached_obstacles` (moveit_testing_interfaces.srv.UpdateAttachedObstacles): add obstacles attached to robot links
- `home_waypoint` (std_srvs.srv.Empty): plan to a hard-coded home value in the callback
- `add_walls` (std_srvs.srv.Empty): add wall and ceiling collision objects to the planning scene
- `remove_separate_lightsaber` (std_srvs.srv.Empty): remove the lightsaber as a separate collision object
- `add_separate_lightsaber` (std_srvs.srv.Empty): add the lightsaber as a separate collision object
- `remove_attached_lightsaber` (std_srvs.srv.Empty): remove the lightsaber attached to the end-effector from the RViz planning scene
- `add_attached_lightsaber` (std_srvs.srv.Empty): add the lightsaber to the planning scene as a collision object attached to the end-effector
- `reset_allies` (std_srvs.srv.Empty): update the positions of the allies in the planning scene to their current locations
`moveit_interface`
Type: ament_python

This package is a Python API for the ROS2 MoveIt MoveGroup node.
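As a rough sketch of what such an interface wraps (the actual class and method names in moveit_interface may differ), a Python MoveGroup client ultimately sends goals to the `move_group` action server:

```python
import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from moveit_msgs.action import MoveGroup


class MoveGroupClient(Node):
    """Minimal client for the MoveIt move_group action server."""

    def __init__(self):
        super().__init__('move_group_client')
        # 'move_group' is the action name exposed by the MoveIt MoveGroup node
        self._client = ActionClient(self, MoveGroup, 'move_group')

    def plan(self, goal: MoveGroup.Goal):
        # Wait for the server, then send the MotionPlanRequest asynchronously
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)
```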