This is the central repository for a REMLA project by Group 21. The application performs sentiment analysis on user feedback using a machine learning model. This repository orchestrates the following components hosted in separate repositories:
- model-training: Contains the machine learning training pipeline.
- lib-ml: Contains data pre-processing logic used across components.
- model-service: A wrapper service for the trained ML model. Exposes API endpoints to interact with the model.
- lib-version: A version-aware utility library that exposes version metadata.
- app: Contains the application frontend and backend (user interface and service logic).
- How to Start the Application (Assignment 1)
- Kubernetes Cluster Provisioning (Assignment 2)
- Kubernetes Cluster Monitoring (Assignment 3)
- ML Configuration Management & ML Testing (Assignment 4)
- Istio Service Mesh (Assignment 5)
- Known Issue: macOS Port Conflict (AirPlay Receiver)
- Activity Tracking
- Grade Expectation
- Clone the repository:

  ```bash
  git clone https://github.com/remla25-team21/operation.git
  ```

- Navigate into the project directory and start the app with Docker Compose:

  ```bash
  cd kubernetes
  docker-compose pull && docker-compose up -d
  ```

  The frontend will be available at http://localhost:3000 by default.
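To verify that the stack came up (a quick sanity check; the exact service names depend on the docker-compose.yml):

```bash
docker-compose ps                # all services should report "Up"
curl -I http://localhost:3000    # the frontend should answer with an HTTP status line
```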
Please refer to the additional steps in the Assignment 5 instructions below before proceeding: introducing Istio adds complexity, and some initial setup is required first (especially moving the correct rate-limit.yaml file into place).
These steps guide you through setting up the Kubernetes cluster on your local machine using Vagrant and Ansible, and deploying the Kubernetes Dashboard.
- Install GNU parallel: Before running the setup script, make sure GNU parallel is installed on your system:

  - For Debian/Ubuntu:

    ```bash
    sudo apt-get install parallel
    ```

  - For Red Hat/CentOS:

    ```bash
    sudo yum install parallel
    ```

  - For macOS:

    ```bash
    brew install parallel
    ```

- Run the setup script:

  ```bash
  chmod +x setup_cluster.sh
  ./setup_cluster.sh
  ```
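Once the script finishes, you can confirm that the VMs are up (assuming the cluster is provisioned through Vagrant, as the removal step below implies):

```bash
vagrant status   # all machines should report "running"
```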
- Access the Kubernetes dashboard:
  - After the script completes, open your web browser and navigate to https://dashboard.local (HTTPS is required).
  - You will see a token displayed in your terminal. Copy and paste this token into the Kubernetes Dashboard login page.
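If you lose the token, you can usually mint a fresh one with kubectl; the ServiceAccount name and namespace below are assumptions based on a common dashboard setup, so adjust them to whatever the provisioning playbooks created:

```bash
kubectl -n kubernetes-dashboard create token admin-user
```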
- Remove the cluster: If you want to remove the cluster, run:

  ```bash
  vagrant destroy -f
  ```

  This will remove all the VMs and the Kubernetes cluster.
Refer to README.md in the kubernetes/helm/sentiment-analysis directory for instructions to set up Prometheus and Grafana for monitoring.
Work for Assignment 4 is mainly in the component repositories listed above; see their READMEs for setup and testing details.
Two methods are available for deploying the application with the Istio service mesh.

The first method uses the local Vagrant-based Kubernetes cluster. Run the following commands to configure and start it. (Make sure you have GNU Parallel installed; see Section 2 for details.)
- Run the following commands to properly configure the setup for Vagrant. First, back up the existing file:

  ```bash
  mv kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml \
     kubernetes/extra/rate-limit.minikube.yaml
  ```

  Then move the required file into place:

  ```bash
  mv kubernetes/extra/rate-limit.vagrant.yaml \
     kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml
  ```
- Start the local cluster:

  ```bash
  chmod +x setup_cluster.sh
  ./setup_cluster.sh
  ```
- SSH into the control node:

  ```bash
  vagrant ssh ctrl
  ```
- Deploy the application using Helm:

  ```bash
  cd /vagrant
  GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  helm install my-sentiment-analysis ./kubernetes/helm/sentiment-analysis --set istio.ingressGateway.host=$GATEWAY_IP
  ```
  > [!NOTE]
  > It may take a few minutes for all pods to become ready. You can monitor the status with:
  >
  > ```bash
  > kubectl get pods
  > ```
- Access the frontend at http://192.168.56.91.
Sticky routing is enabled in the DestinationRule. You can use curl to simulate multiple users:

```bash
for i in {1..5}; do curl -s -H "user: 6" http://192.168.56.91/env-config.js; done
for i in {1..5}; do curl -s -H "user: 10" http://192.168.56.91/env-config.js; done
```

Users 6 and 10 should always see the same version on each reload.
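To compare the responses without reading them in full, you can hash each body; a stable hash per user across iterations indicates sticky routing. (md5sum is from GNU coreutils; on macOS, use md5 instead.)

```bash
for u in 6 10; do
  echo "user $u:"
  for i in {1..5}; do curl -s -H "user: $u" http://192.168.56.91/env-config.js | md5sum; done
done
```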
This alternative approach uses Minikube directly on your local machine without Vagrant/Ansible.
If you previously configured the rate limiting setup for Vagrant, and now want to revert to the default Minikube setup, follow these steps:
```bash
mv kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml \
   kubernetes/extra/rate-limit.vagrant.yaml
mv kubernetes/extra/rate-limit.minikube.yaml \
   kubernetes/helm/sentiment-analysis/templates/rate-limit.yaml
```

Note: If you never configured the project for Vagrant, you can ignore this step; the default Minikube configuration is already in place.
We provide an automated script that handles the entire setup process:
```bash
chmod +x start_minikube.sh
./start_minikube.sh --step 1
minikube tunnel   # Keep this running in a separate terminal
./start_minikube.sh --step 2
```

Note: Please refer to the Manual Setup and Deploy section below if you encounter any issues with the script or prefer to run commands individually.
This script will:
- Delete any existing Minikube clusters
- Start Minikube with appropriate resources
- Install Prometheus stack
- Install Istio and its add-ons
- Deploy the application
- Start the Minikube tunnel
- Display access URLs for all services
The script will output instructions for accessing all components when it completes.
If you prefer to run commands individually:
- Clean up any existing Minikube clusters:

  ```bash
  minikube delete --all
  ```
- Start and configure Minikube:

  ```bash
  minikube start --memory=4096 --cpus=4 --driver=docker
  minikube addons enable ingress
  ```

  Note: Resource requirements (4 GB RAM, 4 CPUs) can be adjusted based on your machine's capabilities.
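Before continuing, it can help to confirm the cluster is healthy:

```bash
minikube status      # host, kubelet, and apiserver should be "Running"
kubectl get nodes    # the minikube node should be "Ready"
```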
- Install the Prometheus stack using Helm:

  ```bash
  helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
  helm repo update
  helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
  ```
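The chart starts several components; you can wait for them to settle before moving on (namespace as in the install command above):

```bash
kubectl -n monitoring get pods   # Prometheus, Grafana, and the operator should become Running
```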
- Install Istio and its add-ons:

  ```bash
  istioctl install -y
  kubectl apply -f kubernetes/istio-addons/prometheus.yaml
  kubectl apply -f kubernetes/istio-addons/jaeger.yaml
  kubectl apply -f kubernetes/istio-addons/kiali.yaml
  kubectl label ns default istio-injection=enabled --overwrite
  ```
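To confirm the mesh is in place before deploying, two standard checks:

```bash
kubectl get pods -n istio-system   # istiod and the ingress gateway should be Running
istioctl analyze                   # reports mesh configuration problems, if any
```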
On Apple Silicon Macs, the default file-sharing mechanism for Minikube is more restrictive. To allow the application's hostPath volume to mount correctly, you must first manually share a directory between your Mac and the Minikube VM.

Create a local directory on your Mac:

```bash
mkdir -p ~/data/shared
```

Open the mount tunnel:

```bash
minikube mount ~/data/shared:/mnt/shared   # Keep this running in a separate terminal
```

You must keep this mount command running in its own terminal before proceeding with the steps below.
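You can verify the mount is live from inside the VM (paths as in the mount command above):

```bash
minikube ssh -- ls -la /mnt/shared
```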
- Open the tunnel for the Istio ingress gateway:

  ```bash
  minikube tunnel   # Keep this running in a separate terminal
  ```
- Deploy the application using Helm:

  ```bash
  GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  helm install my-sentiment-analysis ./kubernetes/helm/sentiment-analysis --set istio.ingressGateway.host=$GATEWAY_IP
  ```
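As with the Vagrant route, pods may take a few minutes to become ready; you can watch the rollout with:

```bash
kubectl get pods -w                  # Ctrl-C once everything is Running
helm status my-sentiment-analysis    # summary of the release
```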
- Forward the necessary ports in separate terminals:

  ```bash
  kubectl -n monitoring port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090
  kubectl -n monitoring port-forward service/prometheus-grafana 3300:80
  kubectl -n istio-system port-forward svc/kiali 20001:20001
  ```

  Note: Keep these commands running in separate terminals.
- Access the different interfaces:
  - Application: the URL output by `kubectl get svc istio-ingressgateway -n istio-system` as EXTERNAL-IP.
  - Prometheus: http://localhost:9090
  - Grafana: http://localhost:3300
  - Kiali: http://localhost:20001
For this setup, test sticky sessions with:

```bash
for i in {1..5}; do curl -s -H "user: 6" http://[EXTERNAL-IP]/env-config.js; done
for i in {1..5}; do curl -s -H "user: 10" http://[EXTERNAL-IP]/env-config.js; done
```

We used Istio's traffic routing to run an A/B test between two frontend versions. Prometheus collected usage and satisfaction metrics, and the outcome was visualized in Grafana. Details are in docs/continuous-experimentation.md.
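As a quick check that Prometheus is reachable and scraping (via the port-forward above), you can query its HTTP API; the `up` metric always exists, while the experiment-specific metric names are documented in docs/continuous-experimentation.md:

```bash
curl -s 'http://localhost:9090/api/v1/query?query=up' | head -c 300; echo
```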
To protect the application from abuse and ensure fair usage across users, we implemented rate limiting using an Istio EnvoyFilter. This configuration limits each unique x-user-id header to 10 requests per minute on the inbound sidecar.
We used two EnvoyFilter resources:
- The first inserts the `envoy.filters.http.local_ratelimit` filter into the inbound HTTP filter chain. It defines a token bucket allowing 10 requests every 60 seconds per user.
- The second configures route-level rate limits by matching the `x-user-id` header and enforcing the per-user descriptor.
The response will include a custom header `x-local-rate-limit: true` when rate limiting is triggered.
To test rate limiting:

- Vagrant: Send more than 10 requests within a minute and rate limiting will be applied, although at a global scale rather than per user.
- Minikube: Run the following:

  ```bash
  for i in {1..12}; do curl -s -o /dev/null -w "User 6 - Request $i: %{http_code}\n" -H "x-user-id: 6" http://127.0.0.1/env-config.js; done
  ```

  Then run immediately after:

  ```bash
  for i in {1..12}; do curl -s -o /dev/null -w "User 8 - Request $i: %{http_code}\n" -H "x-user-id: 8" http://127.0.0.1/env-config.js; done
  ```

  You will see that each user can send 10 requests before being rate limited, showing that the limit of 10 is applied per unique user id.
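To confirm that the throttling comes from the Envoy local rate limiter rather than the application itself, inspect the marker header on a response sent after the budget is exhausted:

```bash
curl -si -H "x-user-id: 6" http://127.0.0.1/env-config.js | grep -i "x-local-rate-limit"
```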
If app-service fails to bind to port 5000, macOS's AirPlay Receiver may be using it.
Temporary Workaround
- Go to System Settings -> General -> AirDrop & Handoff and switch off AirPlay Receiver.
- In a terminal, kill any process listening on port 5000:

  ```bash
  lsof -i :5000
  kill -9 <PID>
  ```
Long Term Fix
We plan to eventually change app-service to read its port from an environment variable, which should allow users to freely change ports via the docker-compose.yml file.
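Once that change lands, usage could look something like this (APP_SERVICE_PORT is a hypothetical variable name, not yet supported):

```bash
APP_SERVICE_PORT=5001 docker-compose up -d
```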
See ACTIVITY.md for an overview of team contributions.
To assist with the evaluation of our project, we have included a Grade_Expectation.md that outlines how our implementation aligns with the grading criteria. This document is intended to make the grading process more straightforward and transparent.