
4d radar detection has no object output #15604

Open
AuHondung opened this issue Dec 9, 2024 · 7 comments

Comments

@AuHondung

We appreciate you going through the Apollo documentation and searching previous issues before creating a new one. If neither of those sources helped with your issue, please report it using the following form. Please note that missing info can delay the response time.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 18.04): Linux Ubuntu 20.04
  • Apollo installed from (source or binary): source
  • Apollo version (3.5, 5.0, 5.5, 6.0): 9.0 beta
  • Output of apollo.sh config if on master branch:

Steps to reproduce the issue:

  • I created a new 4D mmWave radar driver in Apollo and successfully read and parsed the data. In the meantime, I started my INS module to get my car's pose info. When I start perception_radar4d.launch, the following error sometimes appears:

    [radar4d] E1209 10:50:32.344231 584098 transform_wrapper.cc:277] [mainboard]Can not find transform. 1733466436.933309555 frame_id: world child_frame_id: novatel Error info: Lookup would require extrapolation into the future. Requested time 1733466436933309440 but the latest data is at time 1733466436923275008, when looking up transform from frame [novatel] to frame [world]:timeout

    [radar4d] E1209 10:50:32.344317 584098 radar4d_detection_component.cc:149] [perception]Failed to get pose at time: 1.73347e+09

    Is the 0.015 s time threshold in the following code set too small? I modified it to 0.1, but the error still occurs. How can I fix it?

bool TransformWrapper::QueryTrans(double timestamp, StampedTransform* trans,
                                  const std::string& frame_id,
                                  const std::string& child_frame_id) {
  cyber::Time query_time(timestamp);
  std::string err_string;
  if (!tf2_buffer_->canTransform(frame_id, child_frame_id, query_time,
                                 static_cast<float>(FLAGS_obs_tf2_buff_size),
                                 &err_string)) {
    apollo::transform::TransformStamped latest_transform;
    double latest_buffer_time;
    if (!FLAGS_hardware_trigger) {
      try {
        latest_transform = tf2_buffer_->lookupTransform(
            frame_id, child_frame_id, apollo::cyber::Time(0));
        latest_buffer_time = latest_transform.header().timestamp_sec();
      } catch (tf2::TransformException& ex) {
        AERROR << ex.what();
        return false;
      }
    }
    // In simulation mode, use the latest transform information if query failed.
    if (!cyber::common::GlobalData::Instance()->IsRealityMode()) {
      query_time = cyber::Time(0);
    } else if (!FLAGS_hardware_trigger &&
               (timestamp - latest_buffer_time < 0.015) &&
               (timestamp - latest_buffer_time > 0)) {
      // soft trigger and the latency is within the tolerance range
      query_time = apollo::cyber::Time(0);
    } else {
      AERROR << "Can not find transform. " << FORMAT_TIMESTAMP(timestamp)
             << " frame_id: " << frame_id
             << " child_frame_id: " << child_frame_id
             << " Error info: " << err_string;
      return false;
    }
  }
  • Sometimes the above error does not appear, but the point_pillars detection model outputs no objects, or only a few ghost targets. I think my CFAR point data quality is fairly good. No matter how I adjust the point_pillars hyperparameters, the model still produces no correct object output. How can I improve my code, or can you provide a radar4d test dataset so I can validate it?

Supporting materials (screenshots, command lines, code/script snippets):


@Jonathans575

1. From the log information, your radar timestamp is newer than the localization timestamp. You can modify the code in modules/perception/radar4d_detection/radar4d_detection_component.cc:

const double radar_query_tf_timestamp = timestamp - radar_query_tf_offset_ * 0.001;
if (!radar2world_trans_.GetSensor2worldTrans(radar_query_tf_timestamp, &radar_trans,
                                             &novatel2world_trans)) {
  out_message->error_code_ = apollo::common::ErrorCode::PERCEPTION_ERROR_TF;
  AERROR << "Failed to get pose at time: " << timestamp;
  return true;
}

2. We will soon upload a 4D millimeter-wave radar data record; you can compare it against your own data to analyze the problem.

@AuHondung
Author



 const double radar_query_tf_timestamp = timestamp - radar_query_tf_offset_ * 0.001;
 if (!radar2world_trans_.GetSensor2worldTrans(radar_query_tf_timestamp, &radar_trans, &novatel2world_trans)) { 
   out_message->error_code_ = apollo::common::ErrorCode::PERCEPTION_ERROR_TF; 
   AERROR << "Failed to get pose at time: " << timestamp; 
   return true; 
 } 

 Eigen::Affine3d radar2novatel_trans;
 if (!radar2novatel_trans_.GetTrans(radar_query_tf_timestamp, &radar2novatel_trans, "novatel", tf_child_frame_id_)) {
   out_message->error_code_ = apollo::common::ErrorCode::PERCEPTION_ERROR_TF;
   AERROR << "Failed to get radar2novatel trans at time: " << timestamp;
   return true;
 }

...
...
...

 if (!GetCarLocalizationSpeed(radar_query_tf_timestamp,
                              &(preprocessor_options.car_linear_speed),
                              &(preprocessor_options.car_angular_speed))) {
   AERROR << "Failed to call get_car_speed. [timestamp: " << timestamp;
   return false;
 }
  • 1) I modified my code in "modules/perception/radar4d_detection/radar4d_detection_component.cc" as above and set radar_query_tf_offset_ to 15, but this error still occurs occasionally:

[radar4d] E1210 14:54:13.907873 2197414 transform_wrapper.cc:277] [mainboard]Can not find transform. 1733466789.412297726 frame_id: world child_frame_id: novatel Error info: Lookup would require extrapolation into the future. Requested time 1733466789412297728 but the latest data is at time 1733466789404588800, when looking up transform from frame [novatel] to frame [world]:timeout

or

[radar4d] E1210 14:54:18.390743 2197416 transform_wrapper.cc:277] [mainboard]Can not find transform. 1733466793.927217722 frame_id: world child_frame_id: novatel Error info: Lookup would require extrapolation into the future. Requested time 1733466793927217664 but the latest data is at time 1733466793904645120, when looking up transform from frame [novatel] to frame [world]:timeout
[radar4d] E1210 14:54:18.390810 2197416 msg_buffer.h:127] [perception]Your timestamp (1.73347e+09) is newer than the latest timestamp (1.73347e+09).
[radar4d] E1210 14:54:18.390816 2197416 radar4d_detection_component.cc:255] [perception]Cannot get car speed.
[radar4d] E1210 14:54:18.390820 2197416 radar4d_detection_component.cc:178] [perception]Failed to call get_car_speed. [timestamp: 1.73347e+09

By the way, the frame rate of /pose is about 134-135 Hz and the frame rate of /my4DradarPointCloud2 is about 14-14.5 Hz. Will this frequency difference cause this error? And can you give me some advice on how to synchronize the time between the sensors and the pose?

  • 2) My filtered CFAR points are all within X: 0 to 100 m, Y: -10 to 10 m, Z: -1 to 5 m, and I modified /modules/perception/radar4d_detection/lib/detector/point_pillars_detection/preprocess_points_cuda.cu to check whether the data points enter the pillars. Each frame has about 2500-4000 input points, but there is still no detection output.
    When I set score_threshold = 0.05 and nms_overlap_threshold = 1, it occasionally produces one or two false detections. Do I need to train the detection model on my own 4D radar data, or validate my code with open datasets? Can you suggest any improvements? Thanks.

    void PreprocessPointsCuda::DoPreprocessPointsCuda(
      const float* dev_points, const int in_num_points, int* dev_x_coors,
      int* dev_y_coors, float* dev_num_points_per_pillar,
      float* dev_pillar_point_feature, float* dev_pillar_coors,
      int* dev_sparse_pillar_map, int* host_pillar_count) {
    GPU_CHECK(cudaMemset(dev_pillar_count_histo_, 0,
                         grid_y_size_ * grid_x_size_ * sizeof(int)));
    GPU_CHECK(cudaMemset(dev_counter_, 0, sizeof(int)));
    GPU_CHECK(cudaMemset(dev_pillar_count_, 0, sizeof(int)));
    
    int num_block = DIVUP(in_num_points, num_threads_);
    
    make_pillar_histo_kernel<<<num_block, num_threads_>>>(
        dev_points, dev_pillar_point_feature_in_coors_, dev_pillar_count_histo_,
        in_num_points, max_num_points_per_pillar_, grid_x_size_, grid_y_size_,
        grid_z_size_, min_x_range_, min_y_range_, min_z_range_, pillar_x_size_,
        pillar_y_size_, pillar_z_size_, num_point_feature_);
    
    make_pillar_index_kernel<<<grid_x_size_, grid_y_size_>>>(
        dev_pillar_count_histo_, dev_counter_, dev_pillar_count_, dev_x_coors,
        dev_y_coors, dev_num_points_per_pillar, dev_sparse_pillar_map,
        max_num_pillars_, max_num_points_per_pillar_, grid_x_size_,
        num_inds_for_scan_);
    
    GPU_CHECK(cudaMemcpy(host_pillar_count, dev_pillar_count_, sizeof(int),
                         cudaMemcpyDeviceToHost));
    
    // Step 3: Copy dev_pillar_count_histo_ to host memory
    std::vector<int> host_pillar_count_histo(grid_y_size_ * grid_x_size_, 0);
    GPU_CHECK(cudaMemcpy(host_pillar_count_histo.data(), dev_pillar_count_histo_,
                         grid_y_size_ * grid_x_size_ * sizeof(int),
                         cudaMemcpyDeviceToHost));
    
    // Step 4: Print each pillar's point count
    int point_num = 0;
    for (int y = 0; y < grid_y_size_; ++y) {
      for (int x = 0; x < grid_x_size_; ++x) {
        int index = y * grid_x_size_ + x;
        if (host_pillar_count_histo[index] > 0) {
          point_num += host_pillar_count_histo[index];
          // std::cout << "Pillar at (" << x << ", " << y
          //           << ") contains " << host_pillar_count_histo[index]
          //           << " points." << std::endl;
        }
      }
    }
    std::cout << "Total number of pillar points: " << point_num << std::endl;
    // ... (rest of DoPreprocessPointsCuda unchanged)

@hearto1314
Contributor

@AuHondung Upgrade to the latest code and set use_gnss_time: false in modules/drivers/gnss/conf/gnss_conf.pb.txt, so that both GNSS and the sensors use system time.
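
The change above corresponds to a single field in the gnss driver config; sketched here with only the relevant line (the file contains other fields, which stay untouched):

```
# modules/drivers/gnss/conf/gnss_conf.pb.txt -- only the relevant field shown
use_gnss_time: false
```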

@Jonathans575

Jonathans575 commented Dec 11, 2024

@AuHondung The 4D radar object detection model was trained on the VoD dataset: https://github.com/tudelft-iv/view-of-delft-dataset, which was collected with a ZF FRGen21 3+1D radar.
So if the characteristics of your radar data differ greatly, poor detection is expected.
In modules/perception/radar4d_detection/data/preprocessor_config.pb.txt, rcs_offset can be adjusted to balance out differences in radar reflection strength.
In addition, please pay attention to the direction of the radar coordinate system.

@AuHondung
Author

@hearto1314 Following your method, I fixed my time synchronization problem by changing the message publishing frequency of my positioning device and using system time, thanks.

@Jonathans575 My radar point coordinate system is FLU, and the radar points display normally in DreamView. Is the radar coordinate system correct?
My radar is the same as the ZF FRGen21: it is a 12T16R 4D radar. I will compare my data with the VoD dataset to check the data characteristics, and I will try adjusting rcs_offset to see the output. Are there any other parameters that affect the model output?
In addition, I really need a record file containing 4D radar and pose data.

@AuHondung
Author

@Jonathans575 @hearto1314

  1. I looked up the data structure of the Oculii Eagle radar and found that the power field is SNR, not RCS. So in modules/perception/radar4d_detection/lib/preprocessor/radar_preprocessor.cc, pt.intensity() is the SNR of the radar point.
      point.x = pt.x();
      point.y = pt.y();
      point.z = pt.z();
      // rcs offset is set for different radars used for training and testing
      point.rcs = static_cast<float>(pt.intensity() + rcs_offset_);
      // doppler velocity is returned by radar
      point.velocity = static_cast<float>(raw_pt.doppler());
      // compensated velocity of each point needs to be calculated
      point.comp_vel = CalCompensatedVelocity(point, options);
      frame->cloud->push_back(point);


  2. So I assigned the SNR of my radar points to pt.intensity() and adjusted rcs_offset to balance out the differences in reflection strength. The model's detection is a little better, but still poor:
    (1) When the target is within 20 meters of the vehicle, it is detected correctly. Beyond this range, detection is very poor.
    (2) The model detects oncoming vehicles well, but hardly detects vehicles traveling in the same direction.

[screenshot: a vehicle in the same direction cannot be detected]
[screenshot: a vehicle in the opposite direction can be detected, but only within 20 m]

So, is the detection performance on your data also this poor?
If your detection works well, what other factors could be affecting mine?
And do you have any other suggestions for improvement?
