Milestone 3.1: Project Proposal
The following are some of the known issues with our current project:
- Deployment: Our current deployment process cannot upgrade the version of any micro-service with zero downtime. In addition, upgrading a service does not take into account the requests it is currently handling, which leaves some requests in an incomplete state forever.
- Latency and errors: The message broker (Kafka) in our application sometimes takes longer to initialize, and by the time everything is up and running the request may already be lost. These issues are difficult to debug because the same symptom can also be caused by an invalid query.
We plan to solve the deployment issues using different deployment strategies, such as blue-green and canary deployments, with Istio, a service mesh platform. Additionally, we plan to explore and implement lock-like strategies to upgrade a service safely.
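As a sketch of what a canary deployment could look like with Istio, the following VirtualService and DestinationRule split traffic between two versions of a service. The service name `orders` and the 90/10 split are illustrative assumptions, not part of our current configuration:

```yaml
# Hypothetical canary rollout: send 90% of traffic to v1 and 10% to v2
# of an "orders" micro-service (names and weights are illustrative).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: v1
          weight: 90
        - destination:
            host: orders
            subset: v2
          weight: 10
---
# Subsets map route destinations to pod labels.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

Shifting the weights gradually toward v2 (and draining v1 before removing it) is what would give us the zero-downtime upgrade described above.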
To solve the latency issues we have to monitor the performance of the code, analyze errors, and trace each service request in our application. Service mesh frameworks such as Istio provide built-in distributed tracing features and eliminate the need to change our application code.
For the initial phase we plan to improve our deployment strategy using Istio. To start, we will set up Istio with Envoy as its sidecar proxy.
- First, test the new version end-to-end in production.
- Inject sidecar proxies transparently into our application pods.
- Enable Istio transparently on all micro-services.
- Configure traffic to enter the mesh through an Istio ingress gateway instead of accessing the application from outside via a Kubernetes Ingress.
- Finally, implement the different deployment strategies using Istio.
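The ingress step above could be sketched with a Gateway and a VirtualService bound to it. The hostname wildcard, service name `frontend`, and port are assumptions for illustration:

```yaml
# Sketch: route external traffic through the default Istio ingress gateway
# instead of a Kubernetes Ingress (host and service names are illustrative).
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway   # selects Istio's default ingress gateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-routes
spec:
  hosts:
    - "*"
  gateways:
    - app-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: frontend   # illustrative entry-point service
            port:
              number: 8080
```

Once traffic enters through the gateway, all routing decisions (including canary weights) are made inside the mesh rather than at the Kubernetes Ingress layer.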
Further, we plan to use Istio's built-in distributed tracing features to solve the latency issues.
- We plan to study and implement the tracing backends that Istio supports, such as Zipkin, Jaeger, and Lightstep, and configure the proxies to send trace spans to one of them.
Distributed tracing makes it possible to track a request through the mesh as it flows across multiple services. This gives a deeper understanding of request latency, serialization, and parallelism via visualization.
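A minimal sketch of how tracing might be enabled at install time, assuming a Zipkin-compatible collector is running at `zipkin.istio-system:9411`; the address and the 100% sampling rate are assumptions suitable only for development:

```yaml
# Sketch: enable mesh-wide tracing via an IstioOperator overlay.
# Backend address and sampling rate are illustrative assumptions.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100.0              # trace every request (dev-only setting)
        zipkin:
          address: zipkin.istio-system:9411
```

In production the sampling rate would be lowered, since tracing every request adds overhead on each sidecar proxy.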