
[Feature]: Implement persistent storage and caching layer for cluster metadata #12

vladoatanasov opened this issue Nov 1, 2024 · 1 comment
Description

The broker needs to report metadata about the Kafka cluster via the Metadata API. This information is read frequently but updated only occasionally. The proposed design is to use a flat file (e.g., JSON or Protobuf) as the persistence layer and to cache the data in memory: on startup, the process loads the flat file into memory, then periodically flushes changes back to disk.
When the process receives a graceful termination signal, it should flush the metadata to disk before exiting.

See the Kafka protocol documentation for the required metadata fields: https://kafka.apache.org/protocol.html. OpenTalaria currently supports Metadata API version 8.

Code of Conduct

  • I agree to follow this project's Code of Conduct
@vladoatanasov vladoatanasov self-assigned this Nov 1, 2024
@vladoatanasov vladoatanasov linked a pull request Nov 15, 2024 that will close this issue
@vladoatanasov (Collaborator, Author) commented:

This issue is a prerequisite.
