How can I generate BEV features? #8

Open
DrinkLego opened this issue Mar 3, 2025 · 3 comments
Comments

@DrinkLego

I truly appreciate the excellent work and comprehensive documentation you've provided! I'm encountering an issue with BEV feature storage. Following the documentation, I run the following command during evaluation to save BEV features:

python tools/test.py \
  projects/configs/maptr/maptr_tiny_r50_24e.py \
  work_dirs/maptr_tiny_r50_24e/YOURCHECKPOINT.pth \
  --eval chamfer \
  --bev_path /path_to_save_bev_features 

However, when executing this code, I observe enormous memory consumption. The process gets killed once memory usage reaches 100%, making it impossible to complete MapTR map evaluation and BEV feature saving. Could you share what memory size you typically use when storing BEV features? Do you have any recommendations for my situation?

The specific error messages are as follows:

[                                                  ] 1/6019, 0.7 task/s, elapsed: 1s, ETA:
  self.post_center_range = torch.tensor(
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 6019/6019, 8.0 task/s, elapsed: 757s, ETA:     0s
Formating bboxes of pts_bbox
Start to convert map detection format...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 6019/6019, 8064.0 task/s, elapsed: 1s, ETA:     0s
/MapBEVPrediction/processed/maptr/nuscenes_map_anns_val.json exist, not update
Results writes to test/maptr_tiny_r50_24e/Tue_Feb_25_11_16_03_2025/pts_bbox/nuscmap_results.json
Evaluating bboxes of pts_bbox
Formating results & gts by classes
results path: 
MapBEVPrediction/MapTR_modified/test/maptr_tiny_r50_24e/Tue_Feb_25_11_16_03_2025/pts_bbox/nuscmap_results.json
Killed
@alfredgu001324
Owner

alfredgu001324 commented Mar 3, 2025

Thanks for your interest in our work! Can you maybe put some breakpoints in this file to see whether this line actually works for you?

I also encountered this issue initially, but then decided to use this approach: save the BEV features locally and then delete them from RAM, repeating this frame by frame so that memory does not keep building up.
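
For reference, what I mean by frame-by-frame saving is roughly the following (just a minimal sketch; names like save_bev_feature, bev_embed, sample_token, and bev_path are placeholders, not necessarily the exact variables in the repo):

import os
import torch

def save_bev_feature(bev_embed, sample_token, bev_path):
    # Write one frame's BEV feature to disk so it never accumulates in RAM.
    os.makedirs(bev_path, exist_ok=True)
    out_file = os.path.join(bev_path, f"{sample_token}.pt")
    # Detach and move to CPU first so no GPU memory is held onto.
    torch.save(bev_embed.detach().cpu(), out_file)
    # Drop the reference right away instead of appending it to a growing list.
    del bev_embed

Inside the test loop you would call something like this right after the forward pass, and only keep the vectorized-map predictions in the results list.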

As a reference, my workstation has 64GB of RAM in total.

@DrinkLego
Author

Thank you for your response! I followed your suggestion and put some breakpoints in the file, but found that it didn't work for me. The program doesn't seem to execute this part, and I'm still encountering OOM issues.

I'd like to ask: When running eval.py, does the program first evaluate the map before extracting BEV features, or are these two steps performed simultaneously? During execution, I can't see the map evaluation results before the process gets killed.

Do you have any other suggestions to resolve this issue?

@alfredgu001324
Owner

alfredgu001324 commented Mar 7, 2025

Hmm, weird. This is the actual evaluation code behind test.py, since it is imported from this line. Which mapping method's folder are you in? You may need to find the single_gpu_test function that actually gets executed: locate all the corresponding single_gpu_test implementations in each mapping method, put some breakpoints in them, and see which one triggers.
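
To check which copy actually runs, you could paste something like this at the top of each candidate single_gpu_test (temporary debug lines, not repo-specific code):

# Temporary debug lines: confirm which file's single_gpu_test is executed.
print(f"[debug] single_gpu_test entered from {__file__}")
breakpoint()  # drops into pdb; type 'c' to continue, 'q' to quit

Whichever file prints its path and stops in pdb is the one test.py actually uses.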

When running eval.py, it follows a sequential process: camera images -> BEV features -> vectorized map. What I did is that, besides saving the vectorized map, I also saved the intermediate BEV features, so these two steps are performed more or less simultaneously. The actual map metric evaluation happens only after all the vectorized maps have been predicted; only then does it start to calculate the metrics. I think that is why you can't see the map evaluation results before the process gets killed.
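
In rough pseudo-structure, the test loop looks something like this (just a sketch of the ordering in an mmdet-style test loop; function and argument names are placeholders, not the repo's exact code):

import torch

def run_eval(model, data_loader, dataset):
    # Sketch of the ordering only.
    results = []
    model.eval()
    for data in data_loader:
        with torch.no_grad():
            # camera images -> BEV features -> vectorized map, all inside forward()
            out = model(return_loss=False, rescale=True, **data)
        # The intermediate BEV features get dumped to disk per frame here,
        # so ideally only the vectorized-map predictions stay in RAM.
        results.extend(out)
    # The chamfer metrics are computed only after the whole loop finishes,
    # which is why no map scores are printed before the process is killed.
    return dataset.evaluate(results, metric='chamfer')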

As a first step, besides trying the method mentioned in the first paragraph (i.e., finding all the single_gpu_test functions in the repo and seeing which one triggers the breakpoint), can you also try evaluating on mini_val first to see whether you hit the same problem there?
