- Clone the project to `catkin_ws/src/`.
- Run `catkin_make` in the workspace root.
- Run `roslaunch pr2_robot pick_place_project.launch`.
- Run `rosrun pr2_robot perception_main.py`.
- The output `.yaml` files are written to `pr2_robot/yaml_outputs/`.
- Extract features and train an SVM model on new objects (see `pick_list_*.yaml` in `/pr2_robot/config/` for the list of models you'll be trying to identify).
- Write a ROS node and subscribe to the `/pr2/world/points` topic. This topic contains noisy point cloud data that you must work with.
- Use filtering and RANSAC plane fitting to isolate the objects of interest from the rest of the scene.
- Apply Euclidean clustering to create separate clusters for individual items.
- Perform object recognition on these objects and assign them labels (markers in RViz).
- Calculate the centroid (average in x, y and z) of the set of points belonging to each object.
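
  A minimal sketch of the centroid step, assuming the object's points are available as a numpy array with x, y, z in the first three columns; note the cast to native Python `float`, since numpy scalars do not serialize cleanly to YAML:

  ```python
  import numpy as np

  def compute_centroid(points):
      # points: numpy array of shape (N, 3+) with x, y, z in the first columns
      centroid = np.mean(points[:, :3], axis=0)
      # cast numpy scalars to native floats so they dump cleanly to YAML
      return [float(c) for c in centroid]
  ```
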
- Create ROS messages containing the details of each object (name, pick_pose, etc.) and write these messages out to `.yaml` files, one for each of the 3 scenarios (`test1.world` through `test3.world` in `/pr2_robot/worlds/`). See the example `output.yaml` for details on what the output should look like.
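
  A sketch of the YAML dump, with field names following the example `output.yaml` format; the helper functions and the message flattening are illustrative, not the project's exact code:

  ```python
  import yaml

  def make_yaml_dict(test_scene_num, arm_name, object_name, pick_pose, place_pose):
      # pick_pose / place_pose: geometry_msgs/Pose; flatten to plain floats
      def pose_dict(p):
          return {'position': {'x': float(p.position.x),
                               'y': float(p.position.y),
                               'z': float(p.position.z)},
                  'orientation': {'x': float(p.orientation.x),
                                  'y': float(p.orientation.y),
                                  'z': float(p.orientation.z),
                                  'w': float(p.orientation.w)}}
      return {'test_scene_num': test_scene_num, 'arm_name': arm_name,
              'object_name': object_name,
              'pick_pose': pose_dict(pick_pose),
              'place_pose': pose_dict(place_pose)}

  def send_to_yaml(yaml_filename, dict_list):
      # one dict per recognized object, collected under a single top-level key
      with open(yaml_filename, 'w') as f:
          yaml.dump({'object_list': dict_list}, f, default_flow_style=False)
  ```
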
- Submit a link to your GitHub repo for the project, or the Python code for your perception pipeline and your output `.yaml` files (3 `.yaml` files, one for each test world). You must have correctly identified 100% of objects from `pick_list_1.yaml` for `test1.world`, 80% of items from `pick_list_2.yaml` for `test2.world`, and 75% of items from `pick_list_3.yaml` in `test3.world`.
- Congratulations! You're done!
- The optional extra challenges are not yet implemented.
Rubric Points
- Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf.
- Voxel downsampling, pass-through workspace extraction, and RANSAC plane segmentation are implemented in:
  a. `pr2_robot/script/pcl_processor.py: 14 downsample_pcl()`
  b. `pr2_robot/script/pcl_processor.py: 20 workspace_filter()`
  c. `pr2_robot/script/pcl_processor.py: segment_table_objects()`
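
  A minimal sketch of what these three steps typically look like with python-pcl; the parameter values here are illustrative defaults, not necessarily the ones used in `pcl_processor.py`:

  ```python
  import pcl

  def downsample_pcl(cloud, leaf=0.005):
      # voxel-grid downsampling: one representative point per 5 mm voxel
      vox = cloud.make_voxel_grid_filter()
      vox.set_leaf_size(leaf, leaf, leaf)
      return vox.filter()

  def workspace_filter(cloud, axis='z', lo=0.6, hi=1.1):
      # pass-through filter: keep only points inside the workspace slab
      pt = cloud.make_passthrough_filter()
      pt.set_filter_field_name(axis)
      pt.set_filter_limits(lo, hi)
      return pt.filter()

  def segment_table_objects(cloud, dist=0.01):
      # RANSAC plane fit: inliers are the table, outliers the objects
      seg = cloud.make_segmenter()
      seg.set_model_type(pcl.SACMODEL_PLANE)
      seg.set_method_type(pcl.SAC_RANSAC)
      seg.set_distance_threshold(dist)
      inliers, _coefficients = seg.segment()
      table = cloud.extract(inliers, negative=False)
      objects = cloud.extract(inliers, negative=True)
      return table, objects
  ```
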
- Euclidean clustering segmentation is implemented in `pr2_robot/script/pcl_processor.py: 43 cluster_objects()`.
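
  A sketch of the clustering step with python-pcl's Euclidean cluster extraction; the tolerance and cluster-size bounds are illustrative:

  ```python
  import pcl

  def cluster_objects(cloud_xyz, tol=0.02, min_size=50, max_size=5000):
      # cloud_xyz: pcl.PointCloud with XYZ only (color channels stripped)
      tree = cloud_xyz.make_kdtree()
      ec = cloud_xyz.make_EuclideanClusterExtraction()
      ec.set_ClusterTolerance(tol)       # 2 cm neighbor distance
      ec.set_MinClusterSize(min_size)    # reject tiny noise clusters
      ec.set_MaxClusterSize(max_size)    # reject the table if it leaks through
      ec.set_SearchMethod(tree)
      return ec.Extract()                # list of per-object point index lists
  ```
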
- SVM training is implemented in `pr2_robot/notebook/cloud_recognition.ipynb`.
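
  The shape of that training loop, as a hedged sklearn sketch (the feature/label file names are placeholders, and a linear SVM is assumed, consistent with the "linear model" note below):

  ```python
  import numpy as np
  from sklearn.svm import LinearSVC
  from sklearn.model_selection import cross_val_score

  X = np.load('features.npy')   # one histogram feature vector per capture
  y = np.load('labels.npy')     # integer class labels, 8 classes

  clf = LinearSVC(C=1.0)
  print('cv accuracy: %.2f' % cross_val_score(clf, X, y, cv=5).mean())
  clf.fit(X, y)                 # final fit on the full dataset
  ```
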
- The ROS node is implemented in `pr2_robot/script/perception_main.py: 163 main()`.
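
  A minimal sketch of the node structure, using the `/pr2/world/points` topic from the project spec; the callback body is elided:

  ```python
  import rospy
  from sensor_msgs.msg import PointCloud2

  def pcl_callback(msg):
      # filtering -> RANSAC -> clustering -> recognition -> YAML output
      pass

  def main():
      rospy.init_node('perception_main')
      rospy.Subscriber('/pr2/world/points', PointCloud2, pcl_callback,
                       queue_size=1)
      rospy.spin()

  if __name__ == '__main__':
      main()
  ```
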
- The feature extraction function is implemented in `pr2_robot/script/pcl_processor.py: 129 extract_features()`.
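
  A sketch of the feature layout described in the bullets below (HSV color histograms plus surface-normal histograms); the signature and histogram ranges are illustrative:

  ```python
  import numpy as np
  import cv2

  def extract_features(rgb_points, normals, color_bins=64, norm_bins=2):
      # rgb_points: (N, 3) uint8 RGB values; normals: (N, 3) unit normals
      rgb = np.asarray(rgb_points, dtype=np.uint8).reshape(-1, 1, 3)
      hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV_FULL).reshape(-1, 3)
      color_h = [np.histogram(hsv[:, i], bins=color_bins, range=(0, 256))[0]
                 for i in range(3)]
      norm_h = [np.histogram(normals[:, i], bins=norm_bins, range=(-1.0, 1.0))[0]
                for i in range(3)]
      feat = np.concatenate(color_h + norm_h).astype(np.float64)
      return feat / feat.sum()   # normalize so cloud size doesn't matter
  ```
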
- SVM inference is implemented in `pr2_robot/script/pcl_processor.py: 145 class RecognitionModel`.
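
  A hypothetical shape for such an inference wrapper (the pickle keys and method names here are assumptions, not the actual class):

  ```python
  import pickle

  class RecognitionModel(object):
      def __init__(self, model_path):
          with open(model_path, 'rb') as f:
              data = pickle.load(f)
          self.clf = data['classifier']   # fitted sklearn estimator
          self.classes = data['classes']  # index -> label name

      def predict(self, feature):
          # feature: 1-D numpy feature vector from extract_features()
          idx = self.clf.predict(feature.reshape(1, -1))[0]
          return self.classes[idx]
  ```
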
- `cv2.cvtColor(img, cv2.COLOR_RGB2HSV_FULL)` is used in place of matplotlib's HSV conversion, giving much faster feature extraction.
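
  The two conversions side by side (matplotlib works in Python-level float math and returns values in [0, 1]; OpenCV runs a compiled path and, with the `_FULL` variant, keeps hue in the full 0-255 uint8 range):

  ```python
  import numpy as np
  import cv2
  from matplotlib.colors import rgb_to_hsv

  rgb = np.random.randint(0, 256, (10000, 1, 3), dtype=np.uint8)
  hsv_mpl = rgb_to_hsv(rgb.astype(np.float32) / 255.0)  # slower, floats in [0, 1]
  hsv_cv2 = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV_FULL)   # faster, uint8 in [0, 255]
  ```
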
- I modified the `capture_feature.py` script to dump the raw cloud and normal data, so that feature extraction algorithms can be experimented with faster (without re-running feature capture every time). The new script is included in `pr2_robot/script/capture_feature.py`.
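
  The dump/reload cycle that enables this, as an illustrative sketch (file and variable names are mine):

  ```python
  import pickle

  def dump_raw(samples, path='raw_capture.pkl'):
      # samples: list of (label, cloud_array, normals_array) tuples
      with open(path, 'wb') as f:
          pickle.dump(samples, f)

  def load_raw(path='raw_capture.pkl'):
      # iterate on extract_features() offline without re-capturing
      with open(path, 'rb') as f:
          return pickle.load(f)
  ```
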
- Consistency between training and application is improved by explicitly adding the voxel down-sampling step to `capture_feature.py`. This ensures that the cross-validation accuracy correctly reflects the accuracy in the application setting. (The outlier remover, however, is not added, since it does not apply locally and may corrupt the single-item point cloud.)
- The dataset used in training is much larger than the default setting: 150 examples per item across 8 classes. The larger dataset prevents overfitting and improves the robustness of the classifier.
- Feature extraction is improved by setting the surface-normal histogram bins to 2 (to achieve better orientation invariance for the linear model) and the color histogram bins to 64. The training and feature extraction code is in `pr2_robot/notebook/cloud_recognition.ipynb`.
- The final SVM classifier on the full object collection (8 classes) achieves an accuracy score of 0.97.
- The system robustly recognizes all the objects in the three test worlds:
  a. world 1 perception
  b. world 2 perception
  c. world 3 perception