# Motion detection
The motion node is state dependent, so two steps are required before it starts processing. First, initialize the node with the appropriate `roslaunch` command; second, switch to one of the appropriate states with a `rosrun` command. This node detects motion in a scene using a single camera, which can be captured from a Kinect, an Xtion, or a simple USB web camera. The launcher that starts the motion node lets you choose which camera to capture from.
You initiate the execution by running:
```
roslaunch pandora_vision_motion pandora_vision_motion_node_standalone.launch [option]
```

Argument `option`:

- With a Kinect plugged into your computer: `openni:=true`
- With an Xtion plugged into your computer: `openni2:=true`
- With a USB camera plugged into your computer: `usbcamera:=true`
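The three camera options are mutually exclusive. A minimal sketch of a wrapper script (hypothetical; only the launcher name and the arguments listed above come from this document) that maps a camera name to the right launch argument:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the standalone launcher.
# Maps a camera name (kinect | xtion | usb) to the launch argument
# expected by pandora_vision_motion_node_standalone.launch.
camera_arg() {
  case "$1" in
    kinect) echo "openni:=true" ;;
    xtion)  echo "openni2:=true" ;;
    usb)    echo "usbcamera:=true" ;;
    *)      echo "unknown camera: $1" >&2; return 1 ;;
  esac
}

camera="${1:-usb}"  # default to a USB web camera
# Echoed here so the sketch runs without ROS; drop 'echo' to execute.
echo roslaunch pandora_vision_motion pandora_vision_motion_node_standalone.launch "$(camera_arg "$camera")"
```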
Alternatively, you can use the default (non-standalone) launcher:

```
roslaunch pandora_vision_motion pandora_vision_motion_node.launch
```
The node begins its execution only if the robot state changes to either MODE_SENSOR_HOLD or MODE_SENSOR_TEST. After initializing the motion node, change the state by running:

```
rosrun state_manager state_changer state
```

Argument `state`:

- `state = 4` corresponds to `MODE_SENSOR_HOLD`
- `state = 7` corresponds to `MODE_SENSOR_TEST`
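The mapping above can be sketched as a small helper (hypothetical; only the `state_changer` invocation and the two state numbers come from this document):

```shell
#!/usr/bin/env bash
# Hypothetical helper: translates a mode name into the numeric state
# expected by 'rosrun state_manager state_changer <state>'.
state_for_mode() {
  case "$1" in
    MODE_SENSOR_HOLD) echo 4 ;;
    MODE_SENSOR_TEST) echo 7 ;;
    *) echo "unsupported mode: $1" >&2; return 1 ;;
  esac
}

# Echoed rather than executed, so the sketch runs without a ROS master.
echo rosrun state_manager state_changer "$(state_for_mode MODE_SENSOR_HOLD)"
```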
To adjust parameters at runtime, run:

```
rosrun rqt_reconfigure rqt_reconfigure
```

and choose `pandora_vision` -> `motion_node`.
From there you can view:

- the input Depth image, choosing `depth_node` -> `show_depth_image`
- the input RGB image, choosing `rgb_node` -> `show_rgb_image`
- holes found by the Depth and RGB nodes, choosing `hole_fusion_node` -> `show_respective_holes`
- holes found by the Depth and RGB nodes and the merges between them that are considered valid, choosing `hole_fusion_node` -> `show_valid_holes`
- unique holes considered most valid, choosing `hole_fusion_node` -> `show_final_holes`
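If you prefer to avoid the rqt_reconfigure GUI, the same flags can usually be toggled with dynamic_reconfigure's command-line `dynparam` tool. A sketch that builds such commands (the `/pandora_vision/...` node paths are assumptions based on the names listed above, not confirmed by this document):

```shell
#!/usr/bin/env bash
# Hypothetical non-GUI alternative: compose a 'dynparam set' command
# for a given node and debug flag. Node paths under /pandora_vision
# are assumptions based on the node names listed above.
dynparam_cmd() {
  echo "rosrun dynamic_reconfigure dynparam set /pandora_vision/$1 $2 $3"
}

# Echoed here; on a live system, run the printed commands directly.
dynparam_cmd depth_node show_depth_image true
dynparam_cmd hole_fusion_node show_final_holes true
```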