Welcome to the strands_karl wiki!
Here are some todos we need to work on. For each of them we already have a header with some pointers showing you where to look for info or what to try first. If you got the info from some other site, also put a link there, so we know where to check if something stops working. Please try to do them in the listed order, as the list is sorted by easiness and importance.
## Start up the robot, like for the marathon, but without the reporter
- Update the system with

  ```bash
  sudo apt-get update
  sudo apt-get dist-upgrade
  ```

- Run the new tmux file:

  ```bash
  ~/overlay/src/strands_karl/scripts/karl_bringup.bash
  ```

  also available here (look here for original)
## Running xtions without the openni_wrapper
The wrapper we currently use hides certain parameters of the xtion cams. These params can be used to fine-tune the xtion output and may come in handy. To enable their usage, we can use the standard `openni2_launch` package and run

```bash
roslaunch openni2_launch openni2.launch camera:=head_xtion rgb_frame_id:=/head_xtion_rgb_optical_frame depth_frame_id:=/head_xtion_rgb_optical_frame publish_tf:=false
```

on bruxelles and

```bash
roslaunch openni2_launch openni2.launch camera:=chest_xtion rgb_frame_id:=/chest_xtion_rgb_optical_frame depth_frame_id:=/chest_xtion_rgb_optical_frame publish_tf:=false
```

on amsterdam. The params are then available from `rosrun rqt_reconfigure rqt_reconfigure`.
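If you want to set such a parameter from a script instead of the GUI, a `dynamic_reconfigure` client can do the same job as rqt_reconfigure. A minimal sketch; the node name `/head_xtion/driver` and the parameter names are assumptions, so check rqt_reconfigure for what your openni2 driver actually exposes:

```python
#!/usr/bin/env python
# Sketch: changing a reconfigurable camera parameter from code instead of
# the rqt_reconfigure GUI. The node name ('/head_xtion/driver') and the
# parameter names are assumptions -- check rqt_reconfigure for the actual
# names exposed by your openni2_camera version.
import rospy
import dynamic_reconfigure.client

rospy.init_node('xtion_param_client')
client = dynamic_reconfigure.client.Client('/head_xtion/driver', timeout=10)
client.update_configuration({'auto_exposure': False,
                             'auto_white_balance': False})
```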
## Get people perception running

I.e., make sure you can see the output of the upper body detector and the output of the tracker.
- Run:

  ```bash
  roslaunch perception_people_launch people_tracker_robot.launch load_params_from_file:=true
  ```

  which will launch multiple nodes from `strands_perception_people`. Note that launching just the `upper_body_detector` does not seem to do the job, i.e. the node is running and advertising the detection topic, but not broadcasting any messages.
- To show the upper body detector output: run rviz and display the PointCloud2 from the head xtion and the `/upper_body_detector/marker_array` topic, which contains the detector output.
- The tracker output is advertised on `/people_tracker/marker_array`; in rviz it is visualized in the form of a green cube around the person's shoulders. Note that karl_navigation has to be launched to get the tracker output.
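For a quick check that the detector and tracker topics are actually publishing, without opening rviz, a small subscriber that counts the markers is enough. A minimal sketch, assuming the two marker topics above carry standard `visualization_msgs/MarkerArray` messages:

```python
#!/usr/bin/env python
# Minimal sketch: verify that the detector and tracker actually publish
# something, without rviz. Both topic names are taken from above; the
# message type (visualization_msgs/MarkerArray) is the standard one for
# marker_array topics.
import rospy
from visualization_msgs.msg import MarkerArray

def make_cb(name):
    def cb(msg):
        rospy.loginfo('%s: %d markers', name, len(msg.markers))
    return cb

rospy.init_node('people_perception_check')
rospy.Subscriber('/upper_body_detector/marker_array', MarkerArray, make_cb('detector'))
rospy.Subscriber('/people_tracker/marker_array', MarkerArray, make_cb('tracker'))
rospy.spin()
```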
## Debug the no-go map problem [issue closed]

Apparently we cannot have a no-go map and a normal costmap at the same time; check also the original issue.
## Keep track of strands_movebase#27 and strands_movebase#30 and see if something comes up
- strands_movebase#30 is an update that contained updated params for strands_movebase. We tested the new params by having Karl go to the FreedomLookout (where he always used to fail) and he got there! The pull request was therefore merged.
## Playback data from the mongodb

Right now there are video streams and 3D meta rooms stored. We want to retrieve this data so we can work with it.
- How to play back the video:

  Run

  ```bash
  rosrun mongodb_store mongodb_play.py /head_xtion_compressed_rgb_compressed /head_xtion_compressed_depth_libav
  ```

  to play all the entries from the mongodb store, chronologically ordered. You can also use parameters specifying the start and end `Datetime`s of the playback, like so:

  ```bash
  rosrun mongodb_store mongodb_play.py -s "21/11/14 22:30" -e "21/11/14 22:32" /head_xtion_compressed_rgb_compressed /head_xtion_compressed_depth_libav
  ```

  NOTE that the -s and -e params will only be available if Vojta's pull request is merged.

  To be able to visualize the depth image in rviz, run

  ```bash
  rosrun image_transport republish libav in:=/head_xtion_compressed_depth _image_transport:=libav raw out:=/head_xtion/depth/image_raw
  ```

  To see the video and depth map in rviz, open it up, add two 'Image' displays and have them listen to the two topics.
  If you want to see on what dates there are some entries in the mongodb, check out the `roslog` database with the mongo shell and run

  ```
  db.head_xtion_compressed_depth_libav.find({ "_meta.inserted_at": { $gte: new ISODate("2014-11-28T22:13:01.31Z") }}, {"_meta.inserted_at": 1})
  ```

  This will find all entries on or after the specified date and return only the dates of the matching records (see also the pymongo sketch after this list).
- How to get the pcl pointclouds from the meta room, both intermediate and complete clouds. (Check here for initial info.)
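The date lookup above can also be scripted. A minimal pymongo sketch; the host and port are assumptions (mongodb_store is often run on a non-default port, so adjust accordingly), while the database and collection names are the ones used above:

```python
#!/usr/bin/env python
# Sketch: list insertion dates of the stored depth entries with pymongo.
# Host and port are assumptions -- mongodb_store is often configured on a
# non-default port. The database/collection names are the ones used above.
import pymongo
from datetime import datetime

client = pymongo.MongoClient('localhost', 27017)
collection = client['roslog']['head_xtion_compressed_depth_libav']

query = {'_meta.inserted_at': {'$gte': datetime(2014, 11, 28, 22, 13, 1)}}
for doc in collection.find(query, {'_meta.inserted_at': 1}):
    print(doc['_meta']['inserted_at'])
```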
## Data replication from Karl to our server

(For this we need to set up the server.) All the things are stored in a mongodb on the main PC, but this is full (and will keep filling up again), so we need to move the data to another server.
- Clear up space from the robot by dumping the database.
- Create a new map (if needed).
- Set up waypoints at useful positions. (We will talk about this.)
- Create a task that will go to (some of) the waypoints and record some information about humans there, as in a video stream, including the output of the upper body detector and the tracker, with time and location.
Currently the bounding boxes are published in 3D coordinates. Some people need them in 2D image coordinates. This info is being computed already, but it needs to be published to a topic as well. Check out this.
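As a starting point for publishing those 2D coordinates, the projection of a 3D point into the image plane can be done with `image_geometry`. A minimal sketch, assuming the 3D point is already expressed in the camera's optical frame; the camera_info topic name is an assumption:

```python
#!/usr/bin/env python
# Sketch: project a 3D point (given in the camera optical frame) to 2D
# pixel coordinates. The camera_info topic name is an assumption; the
# existing detector code already computes this internally, this only
# illustrates the projection step.
import rospy
from sensor_msgs.msg import CameraInfo
from image_geometry import PinholeCameraModel

model = PinholeCameraModel()

def camera_info_cb(msg):
    model.fromCameraInfo(msg)
    # Example: a point 2 m in front of the camera, 0.5 m to the left.
    u, v = model.project3dToPixel((-0.5, 0.0, 2.0))
    rospy.loginfo('pixel coordinates: (%.1f, %.1f)', u, v)

rospy.init_node('bbox_projection_sketch')
rospy.Subscriber('/head_xtion/rgb/camera_info', CameraInfo, camera_info_cb)
rospy.spin()
```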
Setting the `forward_point_distance` parameter to 0 improves how the robot turns on the spot. Instead of oscillating left and right in some situations, it makes a full ~180 deg. turn. Also, increasing `angular_sim_granularity` to about 0.4 seems to make the robot's on-the-spot rotations a bit less jerky.
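Both parameters can be tried at runtime before committing them to a config file. A minimal sketch using a `dynamic_reconfigure` client; the namespace `/move_base/DWAPlannerROS` is an assumption, so list the reconfigurable nodes with `rosrun dynamic_reconfigure dynparam list` to find the real one:

```python
#!/usr/bin/env python
# Sketch: try the two planner parameters at runtime. The namespace
# '/move_base/DWAPlannerROS' is an assumption -- list the reconfigurable
# nodes with 'rosrun dynamic_reconfigure dynparam list' to find yours.
import rospy
import dynamic_reconfigure.client

rospy.init_node('dwa_param_tweak')
client = dynamic_reconfigure.client.Client('/move_base/DWAPlannerROS', timeout=10)
client.update_configuration({'forward_point_distance': 0.0,
                             'angular_sim_granularity': 0.4})
```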
## [Work toward associating MDL tracker and UBD output](https://github.com/strands-project/strands_perception_people/pull/148)
When opening the MDL tracker source code in Eclipse, it is not hard to see how the information from UBD messages is stored and passed around. If you want to dig into the code, the IDE will help with quick navigation, refactoring and searching for references (ctrl+g).
To associate the output of MDL and UBD, changes were made to `Detections.h(.cpp)`, where all the UBD detections are stored, including the sequence nr. from the UBD message and their index (index in the arrays within the UBD messages). Further changes are in `Hypo.h(.cpp)`, which is the data structure whose content is ultimately sent out as a message. The task, simply put, is to get the right sequence nrs. and indices from `Detections` to `Hypo`. Storing the sequence numbers and indices works well (as far as I can tell); the hard part is where in the code to retrieve them correctly. For the retrieval of those pieces of information, you need two `int`s: the frame number and the detection number (within the frame).
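To make the bookkeeping concrete: the association boils down to carrying a (frame number, detection index) pair from the detection store into each hypothesis. The following is a hypothetical sketch of that data flow, not the actual `Detections`/`Hypo` classes:

```python
# Hypothetical illustration of the association idea, not the real
# Detections/Hypo classes: each stored detection keeps the UBD sequence
# number and its index within that message, and a hypothesis collects
# the pairs of the detections that were used to extend it.
class DetectionStore(object):
    def __init__(self):
        self.by_frame = {}  # frame (UBD seq nr.) -> list of detections

    def add(self, frame, index, bbox):
        self.by_frame.setdefault(frame, []).append(
            {'frame': frame, 'index': index, 'bbox': bbox})

class Hypothesis(object):
    def __init__(self):
        self.supporting = []  # (frame, index) pairs of associated detections

    def extend_with(self, detection):
        self.supporting.append((detection['frame'], detection['index']))
```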
The method you want to take a look at when trying to associate UBD messages to MDL output is `Tracker::process_frame(...)`. There, the methods `extend_trajectories` and `make_new_hypos` are called. Later on there is code which is "Fixing the IDs".
Within `extend_trajectories` and `make_new_hypos`, a Kalman filter is used that, I suppose, should tell me the index of the detection within a frame which is used to extend/create a hypo (hypothesis). However, after editing the code several times, I can't get the output to look right (see the linked issue for example problems). Be warned: the entire code lacks comments/documentation and has variable names that don't explain their purpose.
Good luck :)
- ros.org has a bunch of good tutorials that are useful to go through even when you have been working with ROS for some time, just to refresh your knowledge
- rviz can display almost anything (more than I thought, at least)
- terminator with several windows in one tab can make things a lot easier to observe