[feature] Graceful termination of background threads in LlamaV2 #458

Merged: 3 commits merged into InternLM:main on Sep 26, 2023.

Conversation

akhoroshev (Contributor) commented:

This fixes the LlamaV2 destructor never returning: the background threads are now asked to stop and joined, so destruction completes instead of blocking forever. A minimal sketch of the general shutdown pattern is given below.

A bug in the CUDA allocator's destructor has also been fixed.
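This is not the actual LlamaV2 implementation, only a minimal sketch of the shutdown pattern the title refers to, assuming a worker thread that waits on a condition variable guarded by a stop flag; the class name `BackgroundWorker` and its members are illustrative, not names from the repository.

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical worker: the background thread sleeps on a condition variable
// and exits its loop once a stop flag is set, so the destructor can join it.
class BackgroundWorker {
public:
    BackgroundWorker(): thread_(&BackgroundWorker::loop, this) {}

    ~BackgroundWorker()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_requested_ = true;  // request graceful termination
        }
        cv_.notify_all();            // wake the thread if it is waiting
        if (thread_.joinable()) {
            thread_.join();          // destructor now returns instead of blocking forever
        }
    }

private:
    void loop()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        while (!stop_requested_) {
            // Wait for work or a stop request; the timeout only bounds how long
            // shutdown could be delayed in an edge case, it is not required for correctness.
            cv_.wait_for(lock, std::chrono::milliseconds(100));
            // ... dequeue and process pending requests here ...
        }
    }

    std::mutex              mutex_;
    std::condition_variable cv_;
    bool                    stop_requested_ = false;
    std::thread             thread_;  // declared last: started only after the flag and mutex exist
};

int main()
{
    BackgroundWorker worker;  // destroying it triggers a clean shutdown
    return 0;
}
```

The key point is that the destructor sets the flag under the same mutex the loop checks it with, notifies the waiting thread, and joins it, so the object is never destroyed while the thread is still running.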

akhoroshev changed the title from "Graceful termination of background threads in LlamaV2" to "[feature] Graceful termination of background threads in LlamaV2" on Sep 23, 2023.
lvhan028 requested a review from lzhangzz on September 24, 2023.
@@ -188,7 +193,9 @@ class Allocator<AllocatorType::CUDA>: public IAllocator {
{
TM_LOG_DEBUG(__PRETTY_FUNCTION__);
while (!pointer_mapping_->empty()) {
free((void**)(&pointer_mapping_->begin()->first));
akhoroshev (Contributor, Author) commented on the `free(...)` line above:

  1. It is undefined behavior to modify the key of an unordered_map in place.
  2. The memory type information (host or device) was missing from the free call (a corrective sketch follows below).
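
This is not the merged change itself, only a hedged sketch of the cleanup loop those two points call for: copy the pointer out of the map instead of casting away constness on the key, and dispatch on the memory type recorded at allocation time. The `CudaAllocatorSketch` class, its `freeOne` helper, and the `bool` value type are assumptions for illustration, not the repository's actual definitions.

```cpp
#include <unordered_map>
#include <cuda_runtime.h>

// Hypothetical allocator fragment illustrating the two review points; the member
// layout and function names are assumptions, not the repository's actual code.
class CudaAllocatorSketch {
public:
    ~CudaAllocatorSketch()
    {
        while (!pointer_mapping_.empty()) {
            auto  it      = pointer_mapping_.begin();
            void* ptr     = it->first;   // copy the key instead of mutating it in place
            bool  is_host = it->second;  // memory type recorded at allocation time
            freeOne(&ptr, is_host);      // erases the entry as a side effect
        }
    }

private:
    void freeOne(void** ptr, bool is_host)
    {
        if (ptr != nullptr && *ptr != nullptr) {
            // Use the matching deallocation routine for the recorded memory type.
            (void)(is_host ? cudaFreeHost(*ptr) : cudaFree(*ptr));
            pointer_mapping_.erase(*ptr);  // remove via the map API; keys stay untouched
            *ptr = nullptr;
        }
    }

    // value = true if the buffer was allocated with cudaMallocHost (pinned host memory)
    std::unordered_map<void*, bool> pointer_mapping_;
};
```

Erasing through the map's own API keeps the container's invariants intact, and the recorded flag ensures pinned host buffers go through cudaFreeHost rather than cudaFree.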

lvhan028 self-requested a review on September 25, 2023.

lvhan028 (Collaborator) left a review:
LGTM

lvhan028 merged commit 0cc667e into InternLM:main on Sep 26, 2023.
3 checks passed.

lvhan028 mentioned this pull request on Sep 26, 2023.
3 participants