Running
There are four premade run configurations for running the code. Each configuration is intended for either a live or simulated environment and is provided with either real data or fake data.
The (live, real) configuration is for running on a live system. A set of ROS nodes for translating between JPL and ROS message types should be running to provide robot state and control. Sensor data from the hardware sensors should also be available.
The (live, fake) configuration is for running the code as if it were on a live system. The JPL controllers are substituted with surrogates to provide fake robot state and control. No sensor data is provided.
The (sim, real) configuration is for running a Gazebo simulation of the Roman. Robot state, control, and sensor data are all provided by Gazebo.
The (sim, fake) configuration is a low-fidelity simulation of the Roman. Fake robot state is provided. Control is provided by teleporting the robot's joints. No sensor information is provided.
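To summarize the four configurations:

Configuration   Robot state       Control               Sensor data
(live, real)    JPL translators   JPL translators       hardware sensors
(live, fake)    surrogates        surrogates            none
(sim, real)     Gazebo            Gazebo                Gazebo
(sim, fake)     fake              joint teleportation   none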
Each of these configurations has a corresponding GNU screen configuration file located in rcta_ws/scripts/screen_(live|sim)_[fake]. You can start a screen session for the corresponding configuration via:
screen -c <config>
These screen configuration files will not source rcta_ws/devel/setup.bash for you, so you need to source it yourself before running screen.
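For example, to bring up the low-fidelity (sim, fake) configuration from the root of the workspace (the exact file name follows the pattern above):

source devel/setup.bash
screen -c scripts/screen_sim_fake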
NOTE: The accelerator key specified for all of these configurations is C-x. The general options passed to screen can be changed by editing rcta_ws/scripts/screen.common.
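If that key is set with screen's escape directive, the line in screen.common presumably looks something like the following (C-x as the command key, C-x x to send a literal C-x):

escape ^Xx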
After you've started screen and attached to the session, you can fire off each of the commands stuffed into the various tabs in mostly left-to-right order. If you want to command the robot through RViz, you probably need to fire off the rviz command last (it wants the planner to be running at startup so it can query the available planners and such).
Log in to AM1.
ssh rcta@tl1-1-am1
Initialize the Limb(s). Keep the hand unplugged until after the limb and torso have been initialized.
~/init_limb.sh
I only saw this script available on Roman 1. Initialize the limbs/torso on Roman 4 using whatever method is available there.
With the e-stop pressed, plug in the hand. Initialize the hand.
~/init_hand.sh
As above, this script only exists on Roman 1.
Start the Limb(s) and Hand(s).
~/start_limb.sh
And again, this script only exists on Roman 1.
Create a new screen window and start a roscore. This could be done on any machine. This does NOT need to be done if a roscore is already running on the navbox.
source ~/setup_ros.sh
roscore
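To check whether a roscore is already reachable before starting a new one, any command that contacts the master will do, e.g.:

rostopic list # errors out if no master is reachable at ROS_MASTER_URI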
Create a new screen window in the limb/torso/client screen and start the JPL<->ROS translators. This needs to be done on the machine specified in $RS_LIMB_ROOT/sbin/ipc.cfg (there should be two entries with ROS2JPL and JPL2ROS in the variable names).
source ~/setup_ros.sh
roslaunch roman_client_ros_utils start_roman_ros_translators.launch
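As a quick sanity check that the translators came up, you can list their topics; the exact topic names depend on the roman_client_ros_utils package, so this grep is only a guess:

rostopic list | grep -i roman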
Start the reposition and grasping planners (on Roman 4 this was done on AM3; on Roman 1, from a developer laptop).
cd rcta_ws
source devel/setup.bash # if not already done
export ROS_MASTER_URI and ROS_IP to point at the appropriate roscore and host IP (see the example after this block)
screen -c scripts/screen_live
The processes do not auto-start (but probably could), so execute them all manually. You don't need to start the rviz window set up by this screen file; instead, start RViz on a developer computer with its ROS_MASTER_URI environment variable pointed at the running roscore.
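The ROS_MASTER_URI/ROS_IP exports might look like the following. The hostname and IP here are placeholders: point ROS_MASTER_URI at whichever machine is actually running the roscore (11311 is the default master port), and set ROS_IP to this machine's address on the robot network.

export ROS_MASTER_URI=http://tl1-1-am1:11311
export ROS_IP=192.168.1.50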
On the developer computer, run rviz.
cd rcta_ws
export ROS_MASTER_URI and ROS_IP as in the example above
rosrun rviz rviz -d config/rcta.rviz
If everything is working correctly, you should see the Roman in RViz, visualized at the robot's current state. Two panel plugins should be added to the RViz window, containing widgets for sending commands to the arm and gripper and for running the reposition planner and grasping pipeline.
The (sim, fake) configuration lacks control for the gripper, so the same roman_simulator node that runs in the (live, fake) configuration also runs in this configuration. The other state information provided by the roman_simulator node is ignored.
An additional node, odom_broadcaster, runs to provide a fake transform from odom_combined -> base_footprint. This is to mimic the (sim, real) configuration, where odometry and localization are separate transforms determined by Gazebo and a localization node, respectively.
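A minimal stand-in for what odom_broadcaster provides, assuming an identity transform is acceptable, would be tf's static_transform_publisher (arguments are x y z yaw pitch roll parent child period_ms):

rosrun tf static_transform_publisher 0 0 0 0 0 0 odom_combined base_footprint 100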