
panmap crashing when running on scenenet #74

Open
anja-sheppard opened this issue Nov 22, 2024 · 2 comments
@anja-sheppard

Hi @Schmluk !

I'm looking to run panoptic_mapping on scenenet. I've generated a bag file, ensured the depth images use the 32FC1 encoding, and heavily modified the config file to get this working. However, no mesh is generated or published, and I get a continuous TF error. Any tips on debugging are greatly appreciated.

The error is:

[ERROR] [1732290266.888969457, 9.149315334]: Error getting TF transform from sensor data: Lookup would require extrapolation 0.140940167s into the future.  Requested time 9.149315334 but the latest data is at time 9.008375167, when looking up transform from frame [scenenet_camera_frame] to frame [world]

The scenenet bag has an unusual format: all messages, including the TFs, are published at a rate of 1 Hz. To account for this, I changed a lot of parameters in the config file, resulting in:

# General Parameters
verbosity: 2
input_point_cloud: false
use_lidar: &use_lidar false 
use_range_image: &use_range_image true
estimate_normal: &use_normal true
use_threadsafe_submap_collection: false
use_transform_caching: true
use_tf_transforms: true
transform_lookup_time: 1.0
global_frame_name: world
robot_frame_name: scenenet_camera_frame
robot_mesh_interval: 1.0
robot_mesh_scale: 2.0
visualization_interval: -1.0
data_logging_interval: -1.0
print_timing_interval: 0.0
max_input_queue_length: 100
ros_spinning_threads: 8
msg_latency_s: 1.0
check_input_interval: 1.0

# Camera Parameters (based on SceneNet dataset)
camera:
  verbosity: 1
  width: 320  # Px
  height: 240
  vx: 160  # x offset of the principal point
  vy: 120  # y offset of the principal point
  fx: 277.13
  fy: 289.71
  max_range: 5  # m, based on typical RGBD cameras
  min_range: 0.1  # m, close range for depth sensors
  distortion_model: "plumb_bob"
  D: [0.0, 0.0, 0.0, 0.0, 0.0]  # distortion coefficients (no distortion)
  K: [277.1281292110204, 0.0, 160.0, 0.0, 289.7056274847714, 120.0, 0.0, 0.0, 1.0]  # Camera matrix
  R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]  # Rotation matrix
  P: [277.1281292110204, 0.0, 160.0, 0.0, 0.0, 289.7056274847714, 120.0, 0.0, 0.0, 0.0, 1.0, 0.0]  # Projection matrix
  binning_x: 0
  binning_y: 0
  roi:
    x_offset: 0
    y_offset: 0
    height: 0
    width: 0
    do_rectify: False

# ID Tracker Settings
id_tracker:
  type: single_tsdf
  verbosity: 1
  
  use_lidar: false
  use_range_image: true
  use_class_layer: false
  use_instance_classification: false
  
  # Map Allocator
  voxel_size: 0.05
  truncation_distance: 4.0
  voxels_per_side: 16
  
  # Freespace ESDF Mapping
  esdf_integrator_name: voxfield
  esdf_fiesta_patch: true
  esdf_fiesta_break: false
  finer_esdf_on: true
  fixed_band_esdf_on: true
  occ_voxel_size_ratio: 0.866
  gradient_sign: 1.0
  fixed_band_ratio: 2.0
  esdf_max_distance_m: 2.0
  esdf_local_offset_x: 30
  esdf_local_offset_y: 30
  esdf_local_offset_z: 10

# Submap Allocator Settings
submap_allocator:
  type: semantic  # semantic
  verbosity: 5
  
  # Submap Configuration
  voxels_per_side: 32 # Number of voxels per side in the submap (you can adjust based on scene size)
  truncation_distance: -2  # Negative value means number of voxels
  use_class_layer: true
  
  # Voxel Sizes for Different Object Scales
  small_instance_voxel_size: 0.1  # 10 cm resolution for small objects
  medium_instance_voxel_size: 0.1  # 10 cm resolution for medium objects
  large_instance_voxel_size: 0.1  # 10 cm resolution for large objects
  background_voxel_size: 0.1  # 10 cm for background objects
  unknown_voxel_size: 0.1  # 10 cm for unknown objects
  
freespace_allocator:
  type: monolithic  # Single large free space map
  verbosity: 1
  voxels_per_side: 32  # Same as in submap allocator
  truncation_distance: -2
  voxel_size: 0.1  # 10 cm resolution for free space

# TSDF Integrator Settings
tsdf_integrator:
  type: projective 
  verbosity: 3
  
  # Projective Integration Settings
  use_weight_dropoff: true
  max_weight: 10000
  interpolation_method: adaptive  # Adaptive interpolation method for smoother results
  foreign_rays_clear: true
  integration_threads: 8
  
  # Class-based Projective Settings
  use_binary_classification: true
  update_only_tracked_submaps: true

# Map Management
map_management:
  type: submaps
  verbosity: 6
  
  # Management Frequency
  prune_active_blocks_frequency: 10
  activity_management_frequency: 10
  change_detection_frequency: 15
  
  # Behavior on Submap Deactivation
  merge_deactivated_submaps_if_possible: true
  apply_class_layer_when_deactivating_submaps: false
  
  activity_manager:
    required_reobservations: 1  # Submaps require this many observations to be kept
    deactivate_after_missed_detections: 10  # Deactivate after this many missed detections

# Visualization Settings
visualization:  
  submaps:
    type: submaps  # Visualization type (submaps, single_tsdf)
    verbosity: 1
    color_mode: submaps  # Color modes (submaps, instances, color, etc.)
    visualization_mode: active  # Active submaps only
    submap_color_discretization: 20  # Number of color levels for submap visualization
    visualize_mesh: true  # Visualize the mesh of submaps
    visualize_tsdf_blocks: true  # Visualize TSDF blocks
    visualize_free_space: true  # Visualize free space areas
    visualize_bounding_volumes: true  # Visualize bounding volumes

  planning:
    verbosity: 1
    visualize_slice: true
    slice_resolution: 0.1  # Resolution of slices in visualization
    slice_height: -0.5  # Height of slice for visualization
  
  tracking:
    verbosity: 1
    visualize_tracking: true  # Track and visualize object movements

So it surprises me that the TF error still occurs even though the TF lookup tolerance has been increased. Any help is greatly appreciated!
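For reference, the numbers in the error message are internally consistent. A minimal sketch of the arithmetic (stamps copied from the error above; this is illustrative Python, not panoptic_mapping or tf2 code):

```python
# tf2 refuses to extrapolate past the newest transform in its buffer,
# regardless of how long the caller is willing to wait for a lookup.

requested_time = 9.149315334   # stamp of the incoming sensor data
latest_tf_time = 9.008375167   # newest buffered /tf stamp

gap = requested_time - latest_tf_time
print(f"extrapolation needed: {gap:.9f} s")  # 0.140940167 s, as in the error
```

With TFs at 1 Hz, sensor stamps can lead the newest transform by up to roughly a second, so any data arriving between TF updates can hit this error no matter how large the lookup timeout is.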

@anja-sheppard
Author

Here is the bag file, if that would be helpful!

https://drive.google.com/file/d/1m5sdea-20oieAAQJIptiT5gt31BbqAU_/view?usp=sharing

@Schmluk
Collaborator

Schmluk commented Nov 22, 2024

Hi @anja-sheppard,
The problem (as stated in the error message) seems to lie with TF. Make sure that all your TFs are published (ideally before or together with the image data), that system and sim time are synced and published correctly, that the time stamps of your TFs are correct (this may be where your error is coming from), and that all your frame names are correct. Hope this helps!
