Automatic Consul Registration for Quarkus Applications #944
Another feature for your list: we should accept a registration call like

client.registerService(new ServiceOptions()
    .setPort(port)
    .setAddress(address)
    .setName(name)
    .setId("greeting-service")
    .setTags(List.of("v1"))
    .setMeta(Map.of("version", "v1")));
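For completeness, a self-contained version of that call with the Vert.x Consul client (io.vertx:vertx-consul-client) might look like the sketch below; the Consul agent host/port and the service address/port are placeholder values, not taken from this issue.

import io.vertx.core.Vertx;
import io.vertx.ext.consul.ConsulClient;
import io.vertx.ext.consul.ConsulClientOptions;
import io.vertx.ext.consul.ServiceOptions;
import java.util.List;
import java.util.Map;

public class ConsulRegistrationSketch {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Connect to the Consul agent (host/port are placeholders).
        ConsulClient client = ConsulClient.create(vertx,
                new ConsulClientOptions().setHost("localhost").setPort(8500));

        // Register the service with its public address, port, tags and metadata.
        client.registerService(new ServiceOptions()
                        .setPort(8080)
                        .setAddress("10.0.0.5")
                        .setName("greeting-service")
                        .setId("greeting-service")
                        .setTags(List.of("v1"))
                        .setMeta(Map.of("version", "v1")))
                .onSuccess(v -> System.out.println("Registered with Consul"))
                .onFailure(Throwable::printStackTrace);
    }
}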
I've been working a bit on this again today. The host falls back to localhost when none is set:

String host = instance.getHost() == null ? "localhost" : instance.getHost();

I'm creating a PR soon with this fix. So, we can do automatic registration in Quarkus when the dependency is present, but we need the public IP and public port to register, supplied via the configuration. For the service name we can default to the application name if it is not provided.
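A rough sketch of what such automatic registration at startup could look like is below (assuming the Jakarta namespace of Quarkus 3); the consul.registration.* property names are hypothetical placeholders for the configuration discussed here, not existing properties.

import io.quarkus.runtime.StartupEvent;
import io.vertx.core.Vertx;
import io.vertx.ext.consul.ConsulClient;
import io.vertx.ext.consul.ConsulClientOptions;
import io.vertx.ext.consul.ServiceOptions;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.inject.Inject;
import java.util.Optional;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class ConsulAutoRegistration {

    @Inject
    Vertx vertx;

    @ConfigProperty(name = "quarkus.application.name")
    String applicationName;

    @ConfigProperty(name = "quarkus.http.port")
    int httpPort;

    // Hypothetical properties for the public address and port seen by other services.
    @ConfigProperty(name = "consul.registration.address")
    Optional<String> publicAddress;

    @ConfigProperty(name = "consul.registration.port")
    Optional<Integer> publicPort;

    void register(@Observes StartupEvent event) {
        // Same fallback as above: use localhost when no public address is configured.
        String address = publicAddress.orElse("localhost");

        ConsulClient client = ConsulClient.create(vertx, new ConsulClientOptions());
        client.registerService(new ServiceOptions()
                .setName(applicationName)   // service name defaults to the application name
                .setId(applicationName)     // a real implementation would use a unique id
                .setAddress(address)
                .setPort(publicPort.orElse(httpPort)));
    }
}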
@aureamunoz any progress on this?
Not yet. I'm planning to work on it once a few things are done.
Enhance the integration with Consul to enable applications to automatically register themselves during startup, even if the application doesn't explicitly know its IP address. Consul will handle the communication, minimizing configuration effort.
When a Quarkus application starts, if the stork-service-registration-consul dependency is present, it should automatically register with Consul, including its IP address, public port, and other metadata. To provide flexibility, we will introduce the following new configuration options (a sketch of the implied defaulting chain follows this list):

- Public address (IP) of the service.
- Public port (defaults to quarkus.http.port).
- Service name (defaults to quarkus.application.name).
- Service ID (defaults to quarkus.uuid or similar).
- Health check, defaulting to the liveness endpoint. Note that it may run on a separate management interface, with a different public IP and port, so manual configuration may be needed.

For full Docker mode, this will work assuming the private port is exposed on the network.
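Here is a small sketch of that defaulting chain; the consul.registration.* keys are hypothetical placeholders, while quarkus.http.port, quarkus.application.name and quarkus.uuid are existing Quarkus properties.

import java.util.UUID;
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;

public class RegistrationDefaults {
    public static void main(String[] args) {
        Config config = ConfigProvider.getConfig();

        // Explicit option first, then the Quarkus default.
        int port = config.getOptionalValue("consul.registration.port", Integer.class)
                .orElseGet(() -> config.getValue("quarkus.http.port", Integer.class));

        String name = config.getOptionalValue("consul.registration.name", String.class)
                .orElseGet(() -> config.getValue("quarkus.application.name", String.class));

        // Fall back to quarkus.uuid (or a freshly generated UUID) when no id is configured.
        String id = config.getOptionalValue("consul.registration.id", String.class)
                .orElseGet(() -> config.getOptionalValue("quarkus.uuid", String.class)
                        .orElse(UUID.randomUUID().toString()));

        System.out.printf("Registering %s (%s) on port %d%n", name, id, port);
    }
}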
Consul Failure Management
That would be a request for enhancement for the Vert.x Consul client.
Another critical aspect is how the client manages Consul failures. Currently, even in distributed mode, the Consul client is unaware of all Consul instances and doesn't switch to a new leader when a failure occurs.
Proposed solution (a failover sketch follows this list):

- When a failure is detected, query the status endpoint of any reachable Consul server ($ANY_CONSUL_SERVER/v1/status/leader) to find the new leader.
- To optimize performance, clients should be reused instead of being created every time.
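A rough sketch of that failover idea, assuming the Vert.x Consul client: when a call fails, ask a known server for the current leader and reuse a cached client pointed at it. Note that /v1/status/leader returns "<ip>:<raft-port>", so the HTTP API port (8500 is assumed below) still has to be known separately.

import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.ext.consul.ConsulClient;
import io.vertx.ext.consul.ConsulClientOptions;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConsulFailoverSketch {

    private final Vertx vertx;
    private final List<String> knownServers;  // e.g. List.of("10.0.0.1", "10.0.0.2", "10.0.0.3")
    private final int httpPort = 8500;        // Consul HTTP API port (assumption)
    private final Map<String, ConsulClient> clients = new ConcurrentHashMap<>();

    public ConsulFailoverSketch(Vertx vertx, List<String> knownServers) {
        this.vertx = vertx;
        this.knownServers = knownServers;
    }

    // Reuse clients instead of creating a new one for every call.
    private ConsulClient clientFor(String host) {
        return clients.computeIfAbsent(host,
                h -> ConsulClient.create(vertx, new ConsulClientOptions().setHost(h).setPort(httpPort)));
    }

    // Ask a known server for the current leader (GET /v1/status/leader) and return a client
    // bound to it. A real implementation would try each known server until one answers.
    public Future<ConsulClient> leaderClient() {
        return clientFor(knownServers.get(0)).leaderStatus()
                .map(leader -> leader.split(":")[0])  // keep the IP, drop the Raft port
                .map(this::clientFor);
    }
}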
Additional Considerations
cc @cescoffier @melloware @FranckD-Zenika @FranckDemeyer