
JobMaster leaks memory when running too many distributedLoad jobs #18635

Open
liiuzq-xiaobai opened this issue Jun 26, 2024 · 1 comment
Labels
type-bug This issue is about a bug

Comments


Alluxio Version:
2.9.3

Describe the bug
After submitting a large number of distributedLoad jobs in a production environment, the job master develops a memory leak that eventually causes an OOM.
(screenshot attached)

To Reproduce
1. Set up an Alluxio cluster with 1 master and 3 workers.
2. Mock a large number of small files in the under file system.
3. Submit a large number of distributedLoad jobs. Note: use batch-size=1 as the loading argument (see the example command after this list).
4. Observe the memory usage and GC activity of the JobMaster.
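For concreteness, a minimal reproduction sketch along these lines; the UFS path is an assumption and the exact `--batch-size` flag spelling may differ by Alluxio version:

```bash
# Create many small files in the UFS-backed directory, then load them with
# batch size 1 so that each file becomes its own job on the JobMaster.
for i in $(seq 1 100000); do
  echo "data" > /mnt/ufs/smallfiles/file_${i}   # assumed UFS mount point
done

./bin/alluxio fs distributedLoad --batch-size 1 /smallfiles
```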

Expected behavior
Memory usage continues to increase until the maximum heap size is reached, eventually causing an OOM.

Urgency
yes

Are you planning to fix it
yes

Additional context
The root cause is that residual job information in mInfoMap is never deleted.
(screenshots attached)
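For illustration, a minimal Java sketch of the leak pattern, assuming mInfoMap holds per-job state keyed by job ID; apart from mInfoMap and CmdJobTracker, all names are hypothetical and this is not the actual Alluxio code:

```java
// Hypothetical sketch of the leak: entries are added per submitted job and
// never removed, so the map grows with the total number of jobs ever submitted.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CmdJobTrackerLeakSketch {
  private final Map<Long, String> mInfoMap = new ConcurrentHashMap<>();

  void onJobSubmitted(long jobControlId, String jobInfo) {
    mInfoMap.put(jobControlId, jobInfo); // added here...
  }

  void onJobFinished(long jobControlId) {
    // ...but nothing removes the entry when the job completes,
    // so heap usage keeps growing until the JobMaster OOMs.
  }
}
```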


liiuzq-xiaobai commented Jun 28, 2024

After verifying in a test environment (alluxio.job.master.job.trace.retention.time=5m), the memory usage of mInfoMap is reduced.
(screenshot attached)
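For reference, the setting above would typically go into the job master's site configuration; the file path is the standard Alluxio location, and a job master restart is assumed to be required for it to take effect:

```properties
# conf/alluxio-site.properties on the job master node
alluxio.job.master.job.trace.retention.time=5m
```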

alluxio-bot pushed a commit that referenced this issue Jul 8, 2024
fix: fix job-master memory leak when submitting a large number of distributed jobs (DIST_LOAD/DIST_CP/Persist jobs)

### What changes are proposed in this pull request?

Start a periodic thread in CmdJobTracker to clear expired job information that can no longer be traced by the client. The default retention time is 1 day, which matches the configuration used by LoadV2.
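A minimal sketch of this approach, assuming the tracker's state lives in mInfoMap; names other than mInfoMap, CmdJobTracker, and the property are hypothetical, and this is not the actual Alluxio implementation:

```java
// Sketch of a periodic cleanup: evict entries from mInfoMap once they are
// older than the retention window (alluxio.job.master.job.trace.retention.time,
// default 1d). The check interval is an assumption.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class CmdJobTrackerCleanupSketch {
  private static final long RETENTION_MS = TimeUnit.DAYS.toMillis(1);          // retention time
  private static final long CHECK_INTERVAL_MS = TimeUnit.MINUTES.toMillis(10); // assumed interval

  // jobControlId -> completion timestamp (simplified; the real map holds richer job info)
  private final Map<Long, Long> mInfoMap = new ConcurrentHashMap<>();
  private final ScheduledExecutorService mCleaner =
      Executors.newSingleThreadScheduledExecutor();

  void start() {
    mCleaner.scheduleAtFixedRate(
        this::evictExpired, CHECK_INTERVAL_MS, CHECK_INTERVAL_MS, TimeUnit.MILLISECONDS);
  }

  private void evictExpired() {
    long now = System.currentTimeMillis();
    // Remove entries whose completion time is older than the retention window.
    mInfoMap.entrySet().removeIf(e -> now - e.getValue() > RETENTION_MS);
  }
}
```

Running the eviction on its own scheduled thread keeps cleanup off the job-submission path, and only jobs older than the retention window (i.e., ones the client would no longer trace) are dropped.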

### Why are the changes needed?

When many jobs are submitted, the job master eventually runs into an OOM. The CmdJobTracker retains residual job information that is never cleaned up periodically, resulting in a memory leak.

### Does this PR introduce any user facing changes?

1. Adds a new configuration property: alluxio.job.master.job.trace.retention.time (default value: 1d).

Related issue: #18635
pr-link: #18639
change-id: cid-d4e5853a1818a22c8a0411a27bfe1141c6f24ebd