component analysis
This page contains the analysis done for each of the 5GTANGO components about the changes needed to make them highly available using Kubernetes.
We're going to use Kubernetes to deploy our V&V/SP platforms. This implies changes to each existing container at different levels. For some of them we may find that the effort of such an adaptation does not pay off compared to the gains; reaching that conclusion is also a valid outcome of the analysis.
Please answer the following questions for each container:
- Is your service stateless? Most of the REST APIs are stateless.
- If not:
- Can you store the states in an external database?
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message?
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it?
- Best deployment strategy? Anti-affinity or affinity? With which container?
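
For the deployment strategy question, a minimal sketch of how anti-affinity could be expressed is shown below, using the official Kubernetes Python client. The component name, labels and image are placeholders, not real 5GTANGO values; affinity towards another component is built the same way with `V1PodAffinity` instead of `V1PodAntiAffinity`.

```python
# Minimal sketch (not 5GTANGO code): expressing the anti-affinity answer with
# the official Kubernetes Python client. "some-component", its labels and the
# image are placeholders.
from kubernetes import client

labels = {"app": "some-component"}  # hypothetical label

anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        # Never schedule two replicas of this component on the same node.
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(match_labels=labels),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="some-component"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                affinity=anti_affinity,
                containers=[
                    client.V1Container(name="some-component",
                                       image="some-component:latest")
                ],
            ),
        ),
    ),
)
```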
- Is your service stateless? NO
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? NO. Needs to be deployed with the Helm chart https://bitnami.com/stack/rabbitmq/helm
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is it stateless? No
- Can you store the states in an external database? Yes, it already uses a PostgreSQL DB to store information. It also needs to store the information received from the wrappers, which is currently kept in memory while waiting for all the replies from the wrappers. If the container goes down this information is lost and the status stays frozen (NEW or INSTANTIATING).
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? If the LB is in front there aren't problems. There are only problems if the container goes down, because some messages generate a timeout trigger for when the wrappers don't respond. The status stays frozen (NEW or INSTANTIATING).
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It uses RabbitMQ to communicate with the wrappers and with the MANO. 'State' (the correlation ID) is passed in each message: we have to be sure that only one message is sent to the MANO/wrappers per request (see the sketch after this list).
- Best deployment strategy? Anti-affinity or affinity? With which container?
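
A minimal sketch of how the in-memory "waiting for wrapper replies" state could be moved into the existing PostgreSQL DB, so that a crashed replica no longer leaves requests frozen. Table, column and status names are assumptions, as is the use of psycopg2.

```python
# Minimal sketch, assuming psycopg2 and a hypothetical pending_replies table.
import psycopg2

conn = psycopg2.connect("dbname=component user=component")  # placeholder DSN

def remember_pending(correlation_id, timeout_s=300):
    # Called right before a request is published to a wrapper.
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO pending_replies (correlation_id, status, deadline) "
            "VALUES (%s, 'WAITING', now() + %s * interval '1 second')",
            (correlation_id, timeout_s),
        )

def reply_received(correlation_id):
    # Called by whichever replica consumes the wrapper's reply.
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE pending_replies SET status = 'DONE' WHERE correlation_id = %s",
            (correlation_id,),
        )

def expire_stale_requests():
    # Periodic job any replica can run, so requests no longer stay frozen in
    # NEW/INSTANTIATING forever when the replica that sent them has died.
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE pending_replies SET status = 'ERROR' "
            "WHERE status = 'WAITING' AND deadline < now()"
        )
```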
- Is your service stateless? NO
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? No. Needs to be deployed with the Helm chart https://github.com/helm/charts/tree/master/stable/influxdb (see also https://www.influxdata.com/blog/influxdb-clustering/).
- Best deployment strategy? Anti-affinity or affinity? With which container? Affinity with son-monitor for writing/reading data; anti-affinity between the replicas themselves.
- Is your service stateless? NO
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? NO. Needs to be deployed with the Helm chart https://bitnami.com/stack/mongodb/helm
- Best deployment strategy? Anti-affinity or affinity? With which container? Affinity with son-progress for writing/reading data
- Is it stateless? Yes, it is a REST API application
- Can you store the states in an external database? Yes, it already uses a PostgreSQL DB to store information.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? NO
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It doesn't use RabbitMQ
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is it stateless? Yes
- Can you store the states in an external database? It's a database itself
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? It can't be used in that way.
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It doesn't use RabbitMQ
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is it stateless? No, the Prometheus server is stateless but the websocket server is not
- Can you store the states in an external database? Yes, it already uses a PostgreSQL DB to store information.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? No, but we must ensure that all instances gather data from all monitoring targets
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It uses RabbitMQ in order to communicate with SLA, POLICY and MANO.
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is it stateless? Yes
- Can you store the states in an external database?
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? The issue is that all pushgateway instances must be configured as targets in the Prometheus container
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It doesn't use RabbitMQ
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is it stateless? NO
- Can you store the states in an external database? Yes, it already uses a PostgreSQL DB to store information.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? NO
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It doesn't use RabbitMQ
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is your service stateless? NO
- Can you store the states in an external database? n/a
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? NO. Needs to be deployed with the Helm chart https://bitnami.com/stack/postgresql/helm
- Best deployment strategy? Anti-affinity or affinity? With which container? Affinity with tng-rep and mongo
- Is it stateless? Currently, yes. We have to design the best integration with User Management, and redo this analysis.
- Is the service stateless? Yes. The nature of the service is the storage of documents
- Can you store the states in an external database? Since the aim of the tng-cat component is the storage of the received documents and it saves everything in MongoDB, the states are saved in MongoDB.
- Does your container use RabbitMQ? No.
- Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? No
- Best deployment strategy? Anti-affinity or affinity? With which container? None.
- Is it stateless? No
- Can you store the states in an external database? Yes. Currently, while the packager unpackages the package, the ID of the process is kept in memory. We need to store it in a DB (e.g. Redis); see the sketch at the end of this block.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? The component supports the following features, which have different behaviours:
- package upload: a unique (UU)ID is generated for every request, and stored in memory (we need to change it to be stored in the Redis DB). Queries on the status of the processing of the package are made to the DB, as well as updates.
- package download: the query expires after time-out;
- package queries: the query expires after time-out;
- service queries: the query expires after time-out;
- function queries: the query expires after time-out;
- root route: the query expires after time-out;
- pings: the query expires after time-out;
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it?
- Best deployment strategy? Anti-affinity or affinity? With which container?
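
A minimal sketch of the proposed move of the in-memory process ID/status into Redis, so that any replica behind the load balancer can answer status queries. Key names, the TTL and the Redis service name are assumptions.

```python
# Minimal sketch, assuming redis-py and hypothetical key names.
import json
import uuid

import redis

r = redis.Redis(host="redis", port=6379)  # assumed service name

def start_processing():
    process_id = str(uuid.uuid4())
    r.set(f"packager:process:{process_id}",
          json.dumps({"status": "running"}),
          ex=3600)  # expire stale entries after an hour
    return process_id

def update_status(process_id, status, details=None):
    r.set(f"packager:process:{process_id}",
          json.dumps({"status": status, "details": details}),
          ex=3600)

def get_status(process_id):
    # Any replica can answer status queries, because the state now lives
    # outside the container.
    raw = r.get(f"packager:process:{process_id}")
    return json.loads(raw) if raw else None
```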
- Is it stateless? No
- Can you store the states in an external database? Yes, it uses a PostgreSQL DB to store the requests.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? The component supports the following features, which have different behaviours:
- (lifecycle change) request creation (service and slice creation and termination): requests are saved (in PostgreSQL) and messages sent (to RabbitMQ) -- if the container goes down between these two actions, the request stays in the NEW status forever (add a transaction?);
- request updating: messages are received (from RabbitMQ) and requests are updated -- if the container goes down between these two actions, the message will be lost, with the request status never being updated (to INSTANTIATING, ERROR or READY) (add a transaction?);
- requests queries: the query expires after time-out;
- placement policies: creation request expires after time-out;
- records queries: the query expires after time-out;
- service queries: the query expires after time-out;
- function queries: the query expires after time-out;
- root route: the query expires after time-out;
- pings: no impact, the query expires after time-out;
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It uses RabbitMQ to communicate with the MANO. 'State' (the correlation ID) is passed in each message: we have to be sure that only one message is sent to the MANO per request (on the creation request flow). If the update message is read more than once, only the last message to be processed will be saved -- no problem, assuming they'll both be equal. See the sketch after this list.
- Best deployment strategy? Anti-affinity or affinity? With which container?
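
A minimal sketch of how the two crash windows above (request saved but message not yet sent; message received but request not yet updated) could be narrowed, assuming pika for RabbitMQ and psycopg2 for the existing PostgreSQL DB. Queue, table and column names are placeholders, and a full transactional-outbox pattern would be needed to close the publish window completely.

```python
# Minimal sketch, assuming pika and psycopg2, with placeholder names.
import pika
import psycopg2

conn = psycopg2.connect("dbname=gtk user=gtk")                 # placeholder DSN
mq = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
channel = mq.channel()

def publish_creation_request(request_id, payload):
    # The 'published' guard column lets a retry job re-run this safely for
    # requests stuck in NEW, while making sure only one replica actually
    # publishes the message.
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE requests SET published = true "
            "WHERE id = %s AND published = false RETURNING id",
            (request_id,),
        )
        if cur.fetchone() is None:
            return  # another replica already sent it
        channel.basic_publish(
            exchange="",
            routing_key="service.instances.create",             # placeholder
            body=payload,
            properties=pika.BasicProperties(correlation_id=str(request_id)),
        )

def on_update(ch, method, properties, body):
    # Ack only after the DB update has committed, so a crash in between makes
    # RabbitMQ redeliver the message instead of losing the status update.
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE requests SET status = %s WHERE id = %s",     # simplified:
            (body.decode(), properties.correlation_id),          # body = status
        )
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="service.instances.update",          # placeholder
                      on_message_callback=on_update)
# channel.start_consuming() would then run the consumer loop in a worker.
```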
- Is it stateless? Yes
- Is your service stateless? NO. REST requests are stateless, but the way the information that comes from RabbitMQ is managed is not.
- If not:
- Can you store the states in an external database? YES
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? Currently YES; we will change the internal architecture so as to support it.
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? YES. We plan to solve it by setting dynamic filters on the replicas and locking the info that is already being consumed by a replica (see the sketch after this list).
- Best deployment strategy? Anti-affinity or affinity? With which container? Affinity with the Nexus repository, which will be a new container where the policy rules will be hosted.
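
One possible way to implement the "lock the info already consumed by a replica" idea, assuming the pending items live in a PostgreSQL-style table: claim them with `SELECT ... FOR UPDATE SKIP LOCKED`, so each item is processed by exactly one replica. Table, column and handler names are hypothetical.

```python
# Sketch under assumed names: each replica claims different rows, and a
# claimed row stays invisible to the others until the transaction commits.
import psycopg2

conn = psycopg2.connect("dbname=policy user=policy")  # placeholder DSN

def handle_event(payload):
    ...  # hypothetical processing of one policy event

def claim_and_process_one():
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, payload FROM pending_policy_events "
            "WHERE processed = false ORDER BY id LIMIT 1 "
            "FOR UPDATE SKIP LOCKED"
        )
        row = cur.fetchone()
        if row is None:
            return False                  # nothing left, or all rows claimed
        event_id, payload = row
        handle_event(payload)
        cur.execute(
            "UPDATE pending_policy_events SET processed = true WHERE id = %s",
            (event_id,),
        )
    return True  # the row lock is released when the transaction commits
```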
- Is the service stateless? Yes.
- Can you store the states in an external database? Yes.
- Does your container use RabbitMQ? No.
- Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? No.
- Best deployment strategy? Anti-affinity or affinity? With which container? None.
- Is your service stateless? YES
- Best deployment strategy? Anti-affinity or affinity? With which container? Affinity with mongo
- Is your service stateless? No! During the unpackaging there are one or more "unpackaging processes".
- Can you store the states in an external database? In principle yes, e.g., in a MongoDB. The problem is that the tool is also used as an SDK tool, where we do not have MongoDBs etc. available. So the state externalization has to be implemented only for the case where the packager runs as part of the Gatekeeper (see the sketch after this list). Doable, but effort.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? No: if the state of the running unpackaging processes is externalized, each of the containers can answer those requests.
- Does your container use RabbitMQ? No.
- Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? No.
- Best deployment strategy? Anti-affinity or affinity? With which container? None.
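
A minimal sketch of the "externalise state only in Gatekeeper mode" idea: the packager keeps its unpackaging-process state in MongoDB when a connection URL is configured (service platform case) and falls back to an in-memory dict for pure SDK usage. The class names and the MONGO_URL environment variable are assumptions.

```python
# Sketch with assumed names: pick the state backend at start-up.
import os

class InMemoryStates:
    def __init__(self):
        self._states = {}
    def save(self, process_id, state):
        self._states[process_id] = state
    def load(self, process_id):
        return self._states.get(process_id)

class MongoStates:
    def __init__(self, url):
        from pymongo import MongoClient
        self._col = MongoClient(url).packager.process_states  # assumed names
    def save(self, process_id, state):
        self._col.replace_one({"_id": process_id},
                              {"_id": process_id, **state}, upsert=True)
    def load(self, process_id):
        return self._col.find_one({"_id": process_id})

def make_state_store():
    url = os.environ.get("MONGO_URL")  # hypothetical configuration switch
    return MongoStates(url) if url else InMemoryStates()
```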
- Is your service stateless? Yes
- Best deployment strategy? Anti-affinity or affinity? With which container? Affinity with tng-portal and tng-api-gtw
- Is it stateless? No
- Can you store the states in an external database? Yes, currently it uses a PostgreSQL DB to store the records, so it can be used to store the states as well.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? No, as far as the request is served by just one replica of the service.
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It uses RabbitMQ to communicate with the MANO and the monitoring manager in order to create the final agreement when the service is instantiated and to set the agreement to violated when there is a violation. The issue is that, since all the replicas will be consumers of the same queue/topic, I guess the message will be consumed and processed by all the running instances. If RabbitMQ doesn't solve this itself, we may need another "inside tool" to manage the consumption of the messages by only one instance of our service (see the sketch after this list).
- Best deployment strategy? Anti-affinity or affinity? I do not see any restriction on this.
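
Regarding the queue/topic concern above: with RabbitMQ, if all replicas consume from the same named queue bound to the topic exchange, each message is delivered to exactly one consumer (competing consumers), so no extra tool should be needed. A minimal sketch with pika follows; exchange, queue and routing-key names are placeholders.

```python
# Sketch with placeholder names: all replicas share ONE named queue, so
# RabbitMQ delivers each message to exactly one of them.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
channel = connection.channel()

channel.exchange_declare(exchange="platform-events", exchange_type="topic",
                         durable=True)
# Shared, durable queue: declaring one auto-named queue per replica instead
# would duplicate every message to every replica.
channel.queue_declare(queue="sla.violations", durable=True)
channel.queue_bind(queue="sla.violations", exchange="platform-events",
                   routing_key="monitoring.alert.#")

def handle_violation(body):
    ...  # hypothetical processing of the alert payload

def on_violation(ch, method, properties, body):
    handle_violation(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)  # spread the work evenly across replicas
channel.basic_consume(queue="sla.violations", on_message_callback=on_violation)
channel.start_consuming()
```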
- Is the service stateless? Not yet; we are working on asynchronism with the gtkp to make it stateless.
- Can you store the states in an external database? Yes, through tng-rep/cat.
- Does your container use RabbitMQ? No.
- Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? No, as long as all replicas share all the information saved in the tng-rep/cat components.
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is it stateless? No
- Can you store the states in an external database? Yes, it already uses a PostgreSQL DB to store information.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? If the LB is in front there aren't problems. There are only problems if the container goes down, because some messages need to establish a connection to the OpenStack client and wait for processes like deploying/querying/removing. The status stays frozen (NEW, INSTANTIATING or REMOVING); see the sketch at the end of this block.
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It uses RabbitMQ to communicate with the ia-nbi. 'State' (the correlation ID) is passed in each message: we have to be sure that only one message is sent to the ia-nbi per request.
- Best deployment strategy? Anti-affinity or affinity? With which container?
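
A minimal sketch for the frozen-status problem: persist the in-flight OpenStack operation (an external reference plus a deadline) alongside the request, so that any replica running a periodic sweep can re-check it or expire it. Table, column and status names are assumptions, as is psycopg2.

```python
# Sketch under assumed names: record and sweep in-flight OpenStack operations.
import psycopg2

conn = psycopg2.connect("dbname=vim_wrapper user=vim_wrapper")  # placeholder DSN

def record_inflight(request_id, openstack_ref, timeout_s=900):
    # Store the external reference (e.g. a Heat stack ID) before waiting on it.
    with conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE requests SET openstack_ref = %s, "
            "deadline = now() + %s * interval '1 second' WHERE id = %s",
            (openstack_ref, timeout_s, request_id),
        )

def sweep_inflight(check_openstack_status):
    # Periodic job runnable by any replica: re-check operations whose original
    # handler died, and mark the expired ones as ERROR.
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, openstack_ref, deadline < now() AS expired "
            "FROM requests WHERE status IN ('NEW', 'INSTANTIATING', 'REMOVING')"
        )
        for request_id, ref, expired in cur.fetchall():
            new_status = "ERROR" if expired else check_openstack_status(ref)
            cur.execute("UPDATE requests SET status = %s WHERE id = %s",
                        (new_status, request_id))
```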
- Is it stateless? Yes. If the container goes down when processing a message and doesn't respond to the ia-nbi, this can cause the status to stay frozen (NEW or INSTANTIATING).
- Is it stateless? No
- Can you store the states in an external database? Yes, it already uses a PostgreSQL DB to store information.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? If the LB is in front there aren't problems. There are only problems if the container goes down, because some messages need to establish a connection to the SFC client (OVS) and wait for processes like configuring and deconfiguring. The status stays frozen.
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It uses RabbitMQ to communicate with the ia-nbi. 'State' (the correlation ID) is passed in each message: we have to be sure that only one message is sent to the ia-nbi per request.
- Best deployment strategy? Anti-affinity or affinity? With which container?
- Is it stateless? Yes. If the container goes down when processing a message and doesn't respond to the ia-nbi, this can cause a timeout error generated by the ia-nbi.
- Is it stateless? No
- Can you store the states in an external database? Yes, it already uses a PostgreSQL DB to store information.
- If we have a load balancer in front of your service and we have 3 replicas UP, is there any issue regarding which container receives the message? If the LB is in front there aren't problems. There are only problems if the container goes down, because some messages need to establish a connection to the VTN client and wait for processes like configuring and deconfiguring. The status stays frozen.
- Does your container use RabbitMQ? Do you see any issue with having multiple replicas and controlling which replica takes the data and processes it? It uses RabbitMQ to communicate with the ia-nbi. 'State' (the correlation ID) is passed in each message: we have to be sure that only one message is sent to the ia-nbi per request.
- Best deployment strategy? Anti-affinity or affinity? With which container?