Operation Repository

This is the central repository for a REMLA project by Group 21. The application performs sentiment analysis on user feedback using a machine learning model. This repository orchestrates the following components hosted in separate repositories:

  • model-training: Contains the machine learning training pipeline.
  • lib-ml: Contains data pre-processing logic used across components.
  • model-service: A wrapper service for the trained ML model. Exposes API endpoints to interact with the model.
  • lib-version: A version-aware utility library that exposes version metadata.
  • app: Contains the application frontend and backend (user interface and service logic).

Table of Contents

  • How to Start the Application (Assignment 1)
  • Kubernetes Cluster Provisioning (Assignment 2)
  • Kubernetes Cluster Monitoring (Assignment 3)
  • ML Configuration Management & ML Testing (Assignment 4)
  • Istio Service Mesh (Assignment 5)
  • Continuous Experimentation
  • Additional Use Case: Rate Limiting
  • Known Issue: macOS Port Conflict (AirPlay Receiver)
  • Activity Tracking
  • Grade Expectation

How to Start the Application (Assignment 1)

  1. Clone the repository:

    git clone https://github.com/remla25-team21/operation.git
  2. Navigate to the directory containing the Compose file and start the app with Docker Compose:

    cd kubernetes
    docker-compose pull && docker-compose up -d

The frontend will be available at http://localhost:3000 by default.
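
To check that the stack came up correctly, you can list the Compose services and probe the frontend; this is a minimal sanity check, assuming the default port mapping above:

docker-compose ps                                                  # all services should be listed as running
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000     # expect a 200 from the frontend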

Kubernetes Cluster Provisioning (Assignment 2)

Before provisioning, please also refer to the additional steps in the Assignment 5 instructions: introducing Istio adds some complexity, and certain initial setup (in particular, moving the correct rate-limit.yaml file into place) must be done first.

These steps guide you through setting up the Kubernetes cluster on your local machine using Vagrant and Ansible, and deploying the Kubernetes Dashboard.

  1. Install GNU parallel: Before running the setup script, make sure GNU parallel is installed on your system:

    • For Debian/Ubuntu:

      sudo apt-get install parallel
    • For Red Hat/CentOS:

      sudo yum install parallel
    • For macOS:

      brew install parallel
  2. Run the setup script:

    chmod +x setup_cluster.sh
    ./setup_cluster.sh
  3. Access Kubernetes dashboard:

    • After the script completes, open your web browser and navigate to: https://dashboard.local (HTTPS is required).
    • You will see a token displayed in your terminal. Copy and paste this token into the Kubernetes Dashboard login page. (If you need to generate a fresh token later, see the sketch just after this list.)
  4. Remove the cluster: If you want to remove the cluster, run the following command:

    vagrant destroy -f

    This will remove all the VMs and the Kubernetes cluster.
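
If the dashboard token from step 3 expires or gets lost, a new one can be generated from inside the control node. This is only a sketch: it assumes the provisioning playbooks created a dashboard ServiceAccount named admin-user in the kubernetes-dashboard namespace (both names are assumptions; adjust them to your setup) and a Kubernetes version that supports kubectl create token (v1.24+):

# Run inside the control node (vagrant ssh ctrl); ServiceAccount and namespace names are assumptions
kubectl -n kubernetes-dashboard create token admin-user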

Kubernetes Cluster Monitoring (Assignment 3)

Refer to README.md in the kubernetes/helm/sentiment-analysis directory for instructions to set up Prometheus and Grafana for monitoring.

ML Configuration Management & ML Testing (Assignment 4)

Work for Assignment 4 is mainly in the following repositories:

  • model-training
  • lib-ml

See their READMEs for setup and testing details.

Istio Service Mesh (Assignment 5)

Two methods are available for deploying the application with Istio service mesh:

  • Method 1: Using Vagrant/Ansible Cluster from Assignment 2
  • Method 2: Using Local Minikube

Method 1: Using Vagrant/Ansible Cluster

Make sure GNU Parallel is installed (see the Kubernetes Cluster Provisioning section above), then follow the steps below to start the local Kubernetes cluster and deploy the application.

Deploy the Istio-based Setup

  1. Run the following commands to configure the setup for Vagrant. First, back up the existing file (the default Minikube variant):

    mv kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml \
    kubernetes/extra/rate-limit.minikube.yaml

    Then, move the Vagrant-specific file into place:

    mv kubernetes/extra/rate-limit.vagrant.yaml \
    kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml
  2. Start the local cluster:

    chmod +x setup_cluster.sh
    ./setup_cluster.sh
  3. SSH into the control node:

    vagrant ssh ctrl
  4. Deploy the application using Helm:

    cd /vagrant
    GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    helm install my-sentiment-analysis ./kubernetes/helm/sentiment-analysis --set istio.ingressGateway.host=$GATEWAY_IP

    Note: It may take a few minutes for all pods to become ready. You can monitor the status with:

    kubectl get pods
  5. Access the frontend at http://192.168.56.91.

Verify Sticky Sessions

Sticky routing is enabled in DestinationRule. You can use curl to simulate multiple users:

for i in {1..5}; do curl -s -H "user: 6" http://192.168.56.91/env-config.js; done
for i in {1..5}; do curl -s -H "user: 10" http://192.168.56.91/env-config.js; done

Users 6 and 10 should each be served the same version consistently on every reload (the two users may be routed to different versions from each other).
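
To confirm that stickiness is actually configured, you can inspect the DestinationRule installed by the chart. Istio typically implements header-based stickiness via a consistentHash load-balancer setting, so grepping for that field is a reasonable spot check (the exact resource name depends on the chart):

# Run inside the control node (vagrant ssh ctrl)
kubectl get destinationrules
kubectl get destinationrules -o yaml | grep -B 2 -A 4 consistentHash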

Method 2: Using Local Minikube

This alternative approach uses Minikube directly on your local machine without Vagrant/Ansible.

Before Starting:

If you previously configured the rate limiting setup for Vagrant, and now want to revert to the default Minikube setup, follow these steps:

mv kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml \
kubernetes/extra/rate-limit.vagrant.yaml
mv kubernetes/extra/rate-limit.minikube.yaml \
kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml

Note: If you never configured the project for Vagrant, you can ignore this step — the default Minikube configuration is already in place.

Quick Start with Automated Script

We provide an automated script that handles the entire setup process:

chmod +x start_minikube.sh
./start_minikube.sh --step 1

minikube tunnel  # Keep this running in a separate terminal

./start_minikube.sh --step 2

Note: Please refer to the Manual Setup and Deploy section below if you encounter any issues with the script or prefer to run commands individually.

This script will:

  • Delete any existing Minikube clusters
  • Start Minikube with appropriate resources
  • Install Prometheus stack
  • Install Istio and its add-ons
  • Deploy the application
  • Start the Minikube tunnel
  • Display access URLs for all services

The script will output instructions for accessing all components when it completes.
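
Once the script finishes, a quick way to confirm that everything is up is to check the pods and the ingress gateway service; this is just a sanity check, not part of the script:

kubectl get pods -A                                    # application, monitoring and Istio pods should be Running
kubectl get svc istio-ingressgateway -n istio-system   # EXTERNAL-IP should be filled in while the tunnel is running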

Manual Setup and Deploy

If you prefer to run commands individually:

  1. Clean up any existing Minikube clusters:

    minikube delete --all 
  2. Start and configure Minikube:

    minikube start  --memory=4096 --cpus=4 --driver=docker
    minikube addons enable ingress

    Note: Resource requirements (4GB RAM, 4 CPUs) can be adjusted based on your machine's capabilities.

  3. Install Prometheus stack using Helm:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
  4. Install Istio and its add-ons:

    istioctl install -y
    kubectl apply -f kubernetes/istio-addons/prometheus.yaml
    kubectl apply -f kubernetes/istio-addons/jaeger.yaml
    kubectl apply -f kubernetes/istio-addons/kiali.yaml
    kubectl label ns default istio-injection=enabled --overwrite

⚠️ Important Note for Apple Silicon (M1/M2/M3) Users (skip this step on other machines)

On Apple Silicon Macs, the default file-sharing mechanism for Minikube is more restrictive. To allow the application's hostPath volume to mount correctly, you must first manually share a directory between your Mac and the Minikube VM.

Create a local directory on your Mac:

mkdir -p ~/data/shared

Open the mount tunnel:

minikube mount ~/data/shared:/mnt/shared # Keep this running in a separate terminal

You must keep this mount command running in its own terminal before proceeding with the steps below.
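
To verify that the share is visible inside the Minikube VM, you can create a file on the Mac side and look for it from within the VM; a small sanity check, assuming the mount command above is still running:

touch ~/data/shared/mount-test.txt
minikube ssh -- ls /mnt/shared    # mount-test.txt should appear in the listing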

  1. Open the tunnel for Istio ingress gateway:

    minikube tunnel  # Keep this running in a separate terminal
  2. Deploy the application using Helm:

    GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    helm install my-sentiment-analysis ./kubernetes/helm/sentiment-analysis --set istio.ingressGateway.host=$GATEWAY_IP
  3. Forward necessary ports in separate terminals:

    kubectl -n monitoring port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090
    kubectl -n monitoring port-forward service/prometheus-grafana 3300:80
    kubectl -n istio-system port-forward svc/kiali 20001:20001

    Note: Keep these commands running in separate terminals.

  4. Access the interfaces: Prometheus (http://localhost:9090), Grafana (http://localhost:3300), and Kiali (http://localhost:20001) are reachable through the port-forwards from the previous step. To reach the application itself, look up the external IP of the Istio ingress gateway and open it in your browser:

    kubectl get svc istio-ingressgateway -n istio-system

Verify Sticky Sessions

For this setup, test sticky sessions with the following commands, replacing [EXTERNAL-IP] with the ingress gateway's external IP from the previous step:

for i in {1..5}; do curl -s -H "user: 6" http://[EXTERNAL-IP]/env-config.js; done
for i in {1..5}; do curl -s -H "user: 10" http://[EXTERNAL-IP]/env-config.js; done

Continuous Experimentation

We used Istio’s traffic routing to run an A/B test between two frontend versions. Prometheus collected usage and satisfaction metrics, and the outcome was visualized in Grafana. Details are in docs/continuous-experimentation.md.
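
If you want to look at the experiment metrics without going through Grafana, the Prometheus HTTP API can be queried directly. This is only a sketch: it assumes Prometheus is port-forwarded to localhost:9090 as in the Minikube steps above, and app_feedback_total is a hypothetical metric name, so substitute the metric your experiment actually exposes (see docs/continuous-experimentation.md):

# Sum a (hypothetical) feedback counter per frontend version via the Prometheus HTTP API
curl -s http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum by (version) (app_feedback_total)'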

Additional Use Case: Rate Limiting

To protect the application from abuse and ensure fair usage across users, we implemented rate limiting using an Istio EnvoyFilter. This configuration limits each unique x-user-id header to 10 requests per minute on the inbound sidecar.

We used two EnvoyFilter resources:

  • The first inserts the envoy.filters.http.local_ratelimit filter into the inbound HTTP filter chain. It defines a token bucket allowing 10 requests every 60 seconds per user.
  • The second configures route-level rate limits by matching the x-user-id header and enforcing the per-user descriptor.

The response will include a custom header x-local-rate-limit: true when rate limiting is triggered.
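
To see this in practice, you can list the EnvoyFilter resources installed by the chart and inspect the response headers directly; the x-user-id value below is arbitrary, and the address shown is the Minikube one (use http://192.168.56.91 on the Vagrant setup):

kubectl get envoyfilters        # the two rate-limiting EnvoyFilters should be listed
# After more than 10 requests within a minute, the headers should include x-local-rate-limit: true
curl -si -H "x-user-id: 42" http://127.0.0.1/env-config.js | head -n 15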

To test rate limiting: On Vagrant, send more than 10 requests within a minute; rate limiting will be applied, but only at a global scale. On Minikube, run the following:

for i in {1..12}; do curl -s -o /dev/null -w "User 6 - Request $i: %{http_code}\n" -H "x-user-id: 6" http://127.0.0.1/env-config.js; done    

Then, immediately afterwards, run:

for i in {1..12}; do curl -s -o /dev/null -w "User 8 - Request $i: %{http_code}\n" -H "x-user-id: 8" http://127.0.0.1/env-config.js; done    

Both users should be able to send 10 requests each before being rate limited, showing that the limit of 10 requests per minute is applied per unique user ID.

Known Issue: macOS Port Conflict (AirPlay Receiver)

If app-service fails to bind to port 5000, macOS's AirPlay Receiver may be using it.

Temporary Workaround

  1. Go to System Settings -> General -> AirDrop & Handoff and switch off AirPlay Receiver.
  2. In the terminal, kill any process still bound to port 5000:
    lsof -i :5000
    kill -9 <PID>

Long Term Fix

We plan to eventually change app-service to read its port from an environment variable, which would allow users to change ports freely via the docker-compose.yml file.
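
As a purely hypothetical illustration of that plan (the current compose file does not support this yet): once app-service reads its port from an environment variable and docker-compose.yml interpolates it, the port could be overridden at start-up like so:

# Hypothetical: requires app-service and docker-compose.yml to honour APP_SERVICE_PORT
APP_SERVICE_PORT=5001 docker-compose up -d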

Activity Tracking

See ACTIVITY.md for an overview of team contributions.

Grade Expectation

To assist with the evaluation of our project, we have included a Grade_Expectation.md that outlines how our implementation aligns with the grading criteria. This document is intended to make the grading process more straightforward and transparent.