-
That makes sense to me. The klib would have to parse the "src" parameters from the "download" and "download_env" configuration, and if one or more tokens are identified by the
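The parsing/substitution step described here could be sketched roughly as follows (Python for illustration only; the klib itself would be C, and the `$name` token syntax and kv source are assumptions, not anything the klib actually implements):

```python
import re

# Match hypothetical $name tokens inside a "src" url.
TOKEN = re.compile(r"\$(\w+)")

def substitute_src(src, kv):
    """Replace each $token in src with its value from a kv store
    (e.g. populated by earlier metadata lookups); unknown tokens
    are left untouched."""
    def repl(match):
        return kv.get(match.group(1), match.group(0))
    return TOKEN.sub(repl, src)
```

For example, `substitute_src("http://provisioner.internal/$service_id/env.json", {"service_id": "serviceA-02"})` would yield `"http://provisioner.internal/serviceA-02/env.json"` (url and key names made up).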
-
some of this is ok but I'm not comfortable w/ enabling a binary distribution like this; nanos instances aren't meant to be "upgraded" - they're meant to be replaced w/ upgrades, and the fewer loopholes that people have to inject things, the better
-
INTRO
The `cloud_init` klib, among other functionalities, has the ability, upon instance start, to download json output from a static and predefined url (set at initial image creation) and to set environment variables that can be used later on by the program.

Now, if you want to deploy another instance of the same service, but with a configuration that differs (as an example) from `serviceA-01_env.json`, the rationale is to create a new image with the updated configuration (`serviceA-02_env.json`) and deploy a new instance based off that image, so the `cloud_init` config embeds that env url statically.

If you need another one, you repeat the process: `serviceA-03_env.json` -> image -> instance. That's fine and manageable, but it becomes a bit annoying as the number of instances increases, since for each instance you need a dedicated image (built and uploaded). So when/if the time comes to upgrade Nanos itself (bug fix, improvements, ...), you have to repeat the image build -> image upload process for every instance you want to update. Again, this is doable and manageable most of the time.

In this simplified scenario, the only reason we need a dedicated image for each instance is the static `cloud_init` configuration.
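For illustration, the static per-image `cloud_init` config described above might look roughly like this (a sketch only: the `download_env`/`src` attribute names come from this thread, but the exact schema is not verified here, and the url is made up):

```json
{
  "cloud_init": {
    "download_env": [
      {
        "src": "https://provisioner.example.com/serviceA-01_env.json"
      }
    ]
  }
}
```

The point is that the url is baked into the image, which is why each instance ends up needing its own image.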
Yes, under certain conditions there are workarounds to use a single image and provide dynamic data from the same url, which one can implement on the provisioner (things like: detect the client ip on the request, find the instance id/name/hostname, and that might be enough for you; if not, query the metadata server and get some value(s) (that can be updated) to help you decide, and so on...). You could even use a provisioning binary, downloaded via `cloud_init`, that takes care of downloading the "real" program and/or file configs from the provisioner server; since it can query the internal metadata server, it has a lot of information to use. But imho it's more annoying/complicated than it needs to be.

Proposal
What if we could extend `cloud_init` with some functionality that can help provide dynamic values to be used in the `download` and `download_env` attributes (initially)?

The general naive idea is something like this: in this case we query the gcp metadata server for custom metadata items, store the output value under a `dst` key (basically a kv), and then use that to dynamically replace the key pattern in the `download` and `download_env` `src` url(s), at least. It might look similar to this, but needs to be discussed and agreed upon: