Milestone 3

Part A: Project Proposal

Implementation of Service Mesh Technology for Code-storm Microservices

Introduction:

Milestone 3 for the Code-storm team comprises the implementation of service mesh technology across all the microservices. A service mesh manages all service-to-service communication and resides at the infrastructure layer of the system. This is typically achieved through a web proxy server or "side-car" proxies that are deployed alongside each microservice and through which all system traffic is routed.

Problem Statement:

The Code-storm project comprises five microservices, namely User Management, Session Management, Data Retrieval, Model Execution, and Post Processing, along with an API gateway that connects these services to the User Interface. Each microservice has been containerized using Docker, and communication between them is handled through Apache Kafka. However, the system has problems in the following areas that need to be addressed:

Management of Traffic Flow

Currently all the microservices "talk" to each other via direct internal communication, which becomes a problem when the system needs to be scaled up.

Security Layer

The system is vulnerable to security threats, as there is no dedicated security layer protecting service-to-service traffic.

Load Balancing

The system has no load balancing in place and can be overwhelmed by a very large number of requests.

Logging and Monitoring

The system lacks a detailed logging and monitoring mechanism that can aid in detecting possible causes of failure.

Proposed Solution:

To overcome the above-mentioned pitfalls, the team plans to adopt service mesh technology using Istio. An Istio-based service mesh, coupled with a Kubernetes cluster deployment, will address these problems by externalizing service communication and the related configuration, enforcing security measures via Transport Layer Security (TLS) and policies such as Access Control Lists (ACLs), handling load balancing via configurable algorithms, and enabling request logging and metrics collection.
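
As a rough illustration of how these concerns map onto Istio configuration (Istio 1.5+), the sketch below combines a mesh-wide PeerAuthentication policy that enforces mutual TLS with a DestinationRule that selects a configurable load-balancing algorithm. The namespace "codestorm" and the host "user-management" are placeholder assumptions, not the project's actual names.

```yaml
# Sketch only: "codestorm" and "user-management" are placeholder names.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system        # applying to the root namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT                 # require mutual TLS between side-car proxies
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: user-management
  namespace: codestorm
spec:
  host: user-management.codestorm.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN         # one of Istio's configurable load-balancing algorithms
```

Request logging and metrics come largely for free once the side-cars are in place, since every Envoy proxy reports telemetry that can be scraped by Prometheus and visualized in tools such as Grafana and Kiali.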

Implementation Plan:

  1. Install the Istio control plane in the Kubernetes cluster.
  2. Enable "side-car" proxies for each of the microservices (see the first sketch after this list).
  3. Initially target the test pods through a single service in the cluster.
  4. Add the Envoy proxy configuration to each of the pods to generate a new YAML file.
  5. Upon deployment of the new file, each pod will have two containers: one for the actual application and another for the Envoy proxy.
  6. Running the creation command on the new YAML file starts the Envoy proxy "side-car".
  7. Deploy the minimal Istio resources needed to route traffic to the services and pods (see the second sketch after this list).
  8. Use the vanilla Kubernetes approach to create a second deployment from a new Docker image tagged with the same pod label, and eventually deploy all the microservices using the above-mentioned steps (see the third sketch after this list).
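
A minimal sketch of steps 1–6, under the assumption that the team uses Istio's standard installation and side-car injection workflow: the control plane can be installed with `istioctl manifest apply --set profile=demo` (or `istioctl install` in newer releases), and the side-cars can be injected either automatically, by labelling the application namespace with `kubectl label namespace codestorm istio-injection=enabled`, or manually, by running `istioctl kube-inject -f session-management.yaml > session-management-injected.yaml` to produce the new YAML file referred to in steps 4–6. The service name "session-management", the namespace "codestorm", and the image tag below are placeholders, not the project's actual values.

```yaml
# Hypothetical Deployment and Service for one Code-storm micro service.
# With automatic injection enabled on the namespace (or after istioctl kube-inject),
# each resulting pod runs two containers: the application and the istio-proxy (Envoy) side-car.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: session-management
  namespace: codestorm
  labels:
    app: session-management
spec:
  replicas: 1
  selector:
    matchLabels:
      app: session-management
  template:
    metadata:
      labels:
        app: session-management
        version: v1                                # referenced later by DestinationRule subsets
    spec:
      containers:
      - name: session-management
        image: codestorm/session-management:1.0    # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: session-management
  namespace: codestorm
spec:
  selector:
    app: session-management
  ports:
  - name: http
    port: 80
    targetPort: 8080
```

Applying the injected file with `kubectl apply -f session-management-injected.yaml` and then inspecting the pod with `kubectl get pods -n codestorm` should show 2/2 containers ready, matching step 5.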
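For step 7, the sketch below shows the minimal Istio routing resources, a Gateway and a VirtualService, needed to route incoming traffic through Istio's ingress gateway to a service and its pods. The URI prefix `/session`, the gateway name, and the service name are assumptions for illustration.

```yaml
# Sketch only: host, gateway, and prefix values are placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: codestorm-gateway
  namespace: codestorm
spec:
  selector:
    istio: ingressgateway              # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: session-management
  namespace: codestorm
spec:
  hosts:
  - "*"
  gateways:
  - codestorm-gateway
  http:
  - match:
    - uri:
        prefix: /session               # route /session traffic to the Session Management service
    route:
    - destination:
        host: session-management
        port:
          number: 80
```

Both resources are applied with `kubectl apply -f`, after which external requests reach the pods through the ingress gateway and the side-car proxies rather than through direct internal communication.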
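Finally, one way to wire step 8's second deployment into the mesh: assuming the new Docker image is deployed under the same `app` label but with an additional `version: v2` label, a DestinationRule can define the two subsets and a VirtualService can split mesh-internal traffic between them. The 90/10 split below is purely illustrative.

```yaml
# Sketch only: subset labels and weights are illustrative assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: session-management-versions
  namespace: codestorm
spec:
  host: session-management
  subsets:
  - name: v1
    labels:
      version: v1                      # pods from the original deployment
  - name: v2
    labels:
      version: v2                      # pods from the second deployment (new image)
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: session-management-split
  namespace: codestorm
spec:
  hosts:
  - session-management                 # mesh-internal traffic between the micro services
  http:
  - route:
    - destination:
        host: session-management
        subset: v1
      weight: 90
    - destination:
        host: session-management
        subset: v2
      weight: 10
```

The remaining microservices can then be brought into the mesh by repeating the same steps.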