
[jsk_perception] Add Lidar person detection Node #2703

Open

iory wants to merge 24 commits into master from lidar-person-detection
Conversation

@iory (Member) commented Jun 24, 2022

What is this?

rviz--slash--image.mp4

I added a node that detects people from a lidar sensor.
This node is used to detect a person's position and remove it from the LaserScan when the robot walks alongside the person.
https://github.com/nakane11/teach_spot/blob/master/launch/human_filter_pr2.launch#L30

For more detail, please see the paper DR-SPAAM: A Spatial-Attention and Auto-regressive Model for Person Detection in 2D Range Data.

In order to use this feature, you need to install PyTorch (https://pytorch.org/get-started/locally/); pytorch >= 1.4.0 is recommended.

Subscribing Topic

  • ~input (sensor_msgs/LaserScan)

    Input laser scan.

Publishing Topic

  • ~output (geometry_msgs/PoseArray)

    Position of detected people.

Based on the tracking result, the x axis of each pose points in the direction of movement.

  • ~output/markers (visualization_msgs/MarkerArray)

MarkerArray of detected people.

    The color of the marker is determined based on the tracking result.
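Since the x axis of each published pose points along the tracked direction of movement, a subscriber can recover a heading angle and the corresponding quaternion. A minimal pure-Python sketch (hypothetical helper names, not part of the node's actual code):

```python
import math

def yaw_from_motion(prev_xy, curr_xy):
    """Heading (yaw, radians) from a previous to a current 2D position."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return math.atan2(dy, dx)

def quaternion_from_yaw(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` about the z axis."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# A person moving in the +y direction yields a yaw of pi/2.
yaw = yaw_from_motion((0.0, 0.0), (0.0, 1.0))
qx, qy, qz, qw = quaternion_from_yaw(yaw)
```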

Parameters

  • ~queue_size (Int, default: 1)

    Input queue size.

  • ~conf_thresh (Double, default: 0.8)

    Threshold for confidence.

  • ~weight_file (String, required)

    Trained model's weight file path.

  • ~detector_model (String, default: DR-SPAAM)

Detector model. Currently, only DR-SPAAM is supported.

  • ~stride (Int, default: 1)

    Use this to skip laser points.

  • ~panoramic_scan (Bool, default: false)

Set to true if the scan covers 360 degrees.

  • ~gpu (Int, default: -1)

Index of the GPU used for prediction. Set -1 to use the CPU.

  • ~max_distance (Double, default: 0.5)

    Threshold for tracking max distance.

    If the position in the previous frame is farther than this distance, it will be excluded from the tracking candidates.

  • ~n_previous (Int, default: 10)

Number of previous positions used to determine the moving direction from the previous and current positions.

  • ~map_link (String, default: None, optional)

    If this value is specified, markers are published in ~map_link frame.

  • ~duration_timeout (Double, default: 0.05)

    Timeout duration for the transform lookup when ~map_link is specified.

  • ~color_alpha (Double, default: 0.8)

    Alpha value of visualization marker.

  • ~people_height (Double, default: 1.6)

    Height of visualization marker.

  • ~people_head_radius (Double, default: 0.3)

    Head radius of visualization marker.

  • ~people_body_radius (Double, default: 0.3)

    Body radius of visualization marker.
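As an illustration of the ~max_distance parameter described above: tracking candidates farther than this threshold from a track's previous position are excluded from matching. A hypothetical pure-Python sketch of such a gating step (not the node's actual code):

```python
def gate_candidates(track_xy, detections, max_distance=0.5):
    """Return only the detections within `max_distance` of a track's previous position."""
    tx, ty = track_xy
    return [
        (x, y) for (x, y) in detections
        if ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5 <= max_distance
    ]

# The detection 1.5 m away is excluded from the tracking candidates.
dets = [(0.2, 0.1), (1.5, 0.0), (0.3, 0.3)]
kept = gate_candidates((0.0, 0.0), dets, max_distance=0.5)
```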

Sample

roslaunch jsk_perception sample_lidar_person_detection.launch

@nakane11 (Member) commented Jun 24, 2022

Thank you. I checked that the sample program works well.

I think we can tell whether a person is correctly tracked by assigning the same color as in previous frames.
Is it necessary to separate the detected and confidence-masked results from the tracking results?

@iory (Member, Author) commented Jun 25, 2022

Thanks for your feedback.

Is it necessary to separate detected and confidence masked results from tracking results?

I see. It's a good direction. I'll try it.

@iory force-pushed the lidar-person-detection branch 2 times, most recently from 69f9b02 to 75699f4 on June 25, 2022 19:46
@iory (Member, Author) commented Jun 25, 2022

I changed the tracking method: a Kalman filter now predicts each person's position in 3D, and detections are associated with tracks using the SORT algorithm (https://arxiv.org/abs/1602.00763).

Briefly, the Kalman filter outputs a position prediction, and tracking is done by matching that prediction against the currently detected positions.

If detection runs slowly, tracking may fail, so it is better to increase the stride or use the gpu option to speed up inference.
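The predict-then-match idea can be sketched as follows. This is a deliberately simplified illustration (a 2D constant-velocity prediction with greedy nearest-neighbor matching); the actual SORT algorithm uses a fuller Kalman state and Hungarian assignment:

```python
def predict(state, dt=0.1):
    """Constant-velocity Kalman-style prediction; state is (x, y, vx, vy)."""
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt, vx, vy)

def associate(predictions, detections, max_distance=0.5):
    """Greedily match each predicted position to its nearest unused detection."""
    matches = {}
    used = set()
    for i, (px, py, _, _) in enumerate(predictions):
        best, best_d = None, max_distance
        for j, (dx, dy) in enumerate(detections):
            d = ((dx - px) ** 2 + (dy - py) ** 2) ** 0.5
            if j not in used and d <= best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

tracks = [(0.0, 0.0, 1.0, 0.0)]               # one track moving along +x
preds = [predict(t, dt=0.1) for t in tracks]  # predicted near (0.1, 0.0)
matches = associate(preds, [(0.12, 0.01), (2.0, 2.0)])
```

The nearby detection is matched to the track, while the far one is left unmatched and would seed a new track.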

@nakane11 (Member) left a comment


I suppose these lines can be changed.

iory added 19 commits June 27, 2022 14:23
  * Fixed color scale.

  * Modified base_link to map_link to track moving objects.

  * Use Sort tracking algorithm.
iory and others added 3 commits June 27, 2022 14:23
…on_marker/cylinder.py


[jsk_perception/lidar_person_detection] Refactor initialization of `visualization_msgs/Marker`

Co-authored-by: Aoi Nakane <[email protected]>
…on_marker/sphere.py


[jsk_perception/lidar_person_detection] Refactor initialization of `visualization_msgs/Marker`

Co-authored-by: Aoi Nakane <[email protected]>
@nakane11 (Member) commented Jun 27, 2022

walk_corridor.mp4

I tried lidar_person_detection_node.py with PR1040 and confirmed that it successfully tracked people in the corridor.
I made the video with bag_to_video.py in the jsk_rosbag_tools package (jsk-ros-pkg/jsk_common#1738).

@k-okada (Member) commented Nov 14, 2023

@iory

+ catkin_test_results --verbose --all build
Skipping "catkin_tools_prebuild/package.xml": the root tag is neither 'testsuite' nor 'testsuites'
Full test results for 'jsk_perception/test_results/jsk_perception/roslaunch-check_test_lidar_person_detection.test.xml'
-------------------------------------------------
<testsuite errors="0" failures="1" name="roslaunch-check_test_lidar_person_detection.test.xml" tests="1" time="1"><testcase classname="roslaunch.RoslaunchCheck" name="jsk_perception_test_lidar_person_detection_test" status="run" time="1"><failure message="roslaunch check [/workspace/ros/ws_jsk_recognition/src/jsk_recognition/jsk_perception/test/lidar_person_detection.test] failed" type="" /></testcase><system-out>&lt;![CDATA[
[/workspace/ros/ws_jsk_recognition/src/jsk_recognition/jsk_perception/test/lidar_person_detection.test]:
	while processing /workspace/ros/ws_jsk_recognition/src/jsk_recognition/jsk_perception/sample/sample_lidar_person_detection.launch.launch:
Invalid roslaunch XML syntax: [Errno 2] No such file or directory: u'/workspace/ros/ws_jsk_recognition/src/jsk_recognition/jsk_perception/sample/sample_lidar_person_detection.launch.launch'
]]&gt;</system-out></testsuite>
-------------------------------------------------
Full test results for 'jsk_perception/test_results/jsk_perception/MISSING-rostest-test_lidar_person_detection.xml'
-------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<testsuite tests="1" failures="1" time="1" errors="0" name="rostest-test_lidar_person_detection.xml">
  <testcase name="test_ran" status="run" time="1" classname="Results">
    <failure message="Unable to find test results for rostest-test_lidar_person_detection.xml, test did not run.
Expected results in /workspace/ros/ws_jsk_recognition/build/jsk_perception/test_results/jsk_perception/rostest-test_lidar_person_detection.xml" type=""/>
  </testcase>
</testsuite>

sample/sample_lidar_person_detection.launch.launch is missing
