Thank you for your excellent work! I have two questions I’d like to discuss with you:
I encountered a similar issue: the MTR model does not reach the expected results on the nuScenes dataset. From previous responses in the issues, I understand that I need to set two configurations:

- `max_data_num` to `-1`
- `--split` to one of the options in `prediction_split = ["mini_train", "mini_val", "train", "train_val", "val"]`
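As a sanity check on how I drive the three non-mini splits, here is a minimal sketch; the `preprocess` function is a hypothetical stand-in for UniTraj's actual preprocessing entry point (only the split names and output paths are taken from my setup):

```python
# Valid split names, as quoted from the prediction_split options above.
prediction_split = ["mini_train", "mini_val", "train", "train_val", "val"]

def preprocess(split: str, max_data_num: int = -1) -> str:
    """Hypothetical stand-in for UniTraj's preprocessing entry point.

    Validates the split name and returns the output directory that the
    real preprocessing would write to (paths mirror my config).
    """
    assert split in prediction_split, f"unknown split: {split}"
    return f"/data/nuScenes/result_{split}/"

# Generate the three sub-datasets used below for training and evaluation.
outputs = [preprocess(s) for s in ("train", "train_val", "val")]
print(outputs)
```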
Following this, I generated three sub-datasets using train, train_val, and val. Specifically:
For training, I used train and train_val and set the config.yaml file as follows:
```yaml
# exp setting
exp_name: 'test'
ckpt_path: null
seed: 42 # random seed
debug: True # debug mode, will use cpu only
devices: [1, 2] # gpu ids
# data related
load_num_workers: 0 # number of workers for loading data
train_data_path: ["/data/unScenes/result_train/"] # list of paths to the train data
val_data_path: ["/data/unScenes/result_train_val/"] # list of paths to the train_val data
max_data_num: [-1] # maximum number of data for each training dataset
past_len: 21 # history trajectory length, 2.1s
future_len: 60 # future trajectory length, 6s
object_type: ['VEHICLE'] # object types included in the training set
line_type: ['lane', 'stop_sign', 'road_edge', 'road_line', 'crosswalk', 'speed_bump'] # line types to consider in input
masked_attributes: ['z_axis', 'size'] # attributes to mask in input
trajectory_sample_interval: 1 # trajectory sample interval
only_train_on_ego: False # only train on AV
center_offset_of_map: [30.0, 0.0] # map center offset
use_cache: False # enable data loading cache
overwrite_cache: False # overwrite cache if exists
store_data_in_memory: False # store data in memory
# official evaluation
nuscenes_dataroot: "/data/sets/nuscenes/"
eval_nuscenes: False # evaluate with nuscenes evaluation tool
eval_waymo: False # evaluate with waymo evaluation tool
defaults:
  - method: MTR
```
For evaluation, I used val with the `config.yaml` file set as follows:
```yaml
# exp setting
exp_name: 'test'
ckpt_path: '/my_own_model.ckpt'
seed: 42 # random seed
debug: True # debug mode, will use cpu only
devices: [1, 2] # gpu ids
# data related
load_num_workers: 0 # number of workers for loading data
val_data_path: ["/data/unScenes/result_val/"] # list of paths to val data
max_data_num: [-1] # maximum number of data for each dataset
past_len: 21 # history trajectory length, 2.1s
future_len: 60 # future trajectory length, 6s
object_type: ['VEHICLE'] # object types in training set
line_type: ['lane', 'stop_sign', 'road_edge', 'road_line', 'crosswalk', 'speed_bump'] # line types in input
masked_attributes: ['z_axis', 'size'] # attributes to mask in input
trajectory_sample_interval: 1 # trajectory sample interval
only_train_on_ego: False # only train on AV
center_offset_of_map: [30.0, 0.0] # map center offset
use_cache: False # data loading cache
overwrite_cache: False # overwrite cache if exists
store_data_in_memory: False # store data in memory
# official evaluation
nuscenes_dataroot: "/data/sets/nuscenes/"
eval_nuscenes: False # evaluate with nuscenes evaluation tool
eval_waymo: False # evaluate with waymo evaluation tool
defaults:
  - method: MTR
```
However, I’m a bit confused by your previous response mentioning that “we have not touched the ‘train_val’ split.”
What's more, our MTR model is still not achieving the results reported in the paper. Could you clarify whether there's an issue with my current setup, or whether there are other adjustments I need to make?
When attempting to use the official evaluation, I set `eval_nuscenes` to `True` and pointed `nuscenes_dataroot` to the raw nuScenes data, but encountered an error:
```
File "/UniTraj/unitraj/models/base_model/base_model.py", line 157, in compute_official_evaluation
    'instance': input_dict['scenario_id'][bs_idx].split('_')[1],
IndexError: list index out of range
```
After inspection, I found the issue is that `scenario_id[0]` returns only `['scene-0233']`, without the additional parts expected for 'instance' (`scenario_id.split('_')[1]`) and 'sample' (`scenario_id.split('_')[2]`). It appears this stems from the unified data format used in the UniTraj batch dictionary. Could you advise if there's an adjustment I might be missing here?
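To illustrate, here is a minimal sketch of the failing parse; `parse_scenario_id` is a hypothetical helper, not UniTraj code. The official evaluation assumes ids of the form `<scene>_<instance>_<sample>`, but the unified-format id is just `scene-0233`, so `split('_')` yields a single-element list and index 1 raises:

```python
# Hypothetical helper mirroring the parse in compute_official_evaluation,
# but returning None for missing parts instead of raising IndexError.
def parse_scenario_id(scenario_id: str):
    """Split a scenario id into (scene, instance, sample) parts."""
    parts = scenario_id.split('_')
    scene = parts[0]
    instance = parts[1] if len(parts) > 1 else None
    sample = parts[2] if len(parts) > 2 else None
    return scene, instance, sample

# "scene-0233" has no '_' separators, so instance/sample come back as None.
print(parse_scenario_id("scene-0233"))            # ('scene-0233', None, None)
print(parse_scenario_id("scene-0233_ab12_cd34"))  # ('scene-0233', 'ab12', 'cd34')
```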
Thank you very much for your assistance!
nuScenes has 3 splits: train, train_val, and val. We train MTR on train and validate on val. There are 32k samples in train and 9k samples in val. Can you double-check this?
Sorry, Alan. When I use UniTraj to preprocess the nuScenes 'train_val' split, it only has 1,136 scenes, which contain only about 1,136 trajectories (far from 32k). I'm using the default config file. Could you help me figure out what the problem is?
Does the 32k-trajectory count include the surrounding agents of the focal agent in a scenario? I found that we only train on the focal agent, so if that's correct, I'm only training on 1,136 trajectories for the nuScenes dataset. Some experimental results may confirm my supposition (I got almost the same Brier-FDE6 of 3.51 as the paper). Looking forward to your reply!