We designed a simple yet efficient logging solution to streamline log processing and predict anomalies.
- Fluent Bit collects logs and sends them to Kafka.
- Alloy consumes and transforms the logs from Kafka.
- Alloy forwards the logs to Loki.
- Loki stores logs as indexes and chunks in MinIO.
- Drain3 queries logs from Loki and sends them to Redis.
- Deeplog reads the logs from Redis to train the model.
- Grafana visualizes the logs from Loki and anomalies from Deeplog.
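Downstream consumers such as Drain3 read logs over Loki's HTTP API. As a sketch (the base URL, port, and label selector here are assumptions for illustration, not taken from this repository), building a `query_range` request and pulling raw lines out of its JSON response can look like:

```python
import json
import urllib.parse

# Hypothetical address; in this setup Loki is typically reached via
# kubectl port-forward, so host and port are assumptions.
LOKI_BASE = "http://localhost:3100"

def build_query_range_url(base_url, logql, start_ns, end_ns, limit=100):
    """Build a Loki /loki/api/v1/query_range request URL."""
    params = urllib.parse.urlencode({
        "query": logql,
        "start": start_ns,
        "end": end_ns,
        "limit": limit,
    })
    return f"{base_url}/loki/api/v1/query_range?{params}"

def extract_lines(response_text):
    """Collect the raw log lines from a query_range JSON response."""
    body = json.loads(response_text)
    lines = []
    for stream in body["data"]["result"]:
        for _ts, line in stream["values"]:
            lines.append(line)
    return lines
```

Drain3's CronJob can use the same pattern: query a time window, extract the lines, and push them to Redis.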
My current setup for running this logging solution:
- Device: Apple M4 Max
- RAM: 36 GB
- macOS Version: 15.3.1 (24D70)
- OS: Linux, macOS, or Windows
- Docker: Installed
- Kubernetes Cluster: Installed
- Grafana and Loki: Installed
- Helm: Installed
- Fluent Bit: Installed and configured
We use these tools to develop this solution:
The purpose of using a KIND cluster is to deploy Kubernetes in a local environment. Docker is required to run a KIND cluster. If you already have a Kubernetes cluster, there is no need to deploy a KIND cluster.
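The repository's `./kindcluster/config.yaml` is not reproduced here, but a minimal KIND cluster config with one control-plane node and one worker looks like this (node roles and counts are an assumption, not the repo's actual file):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
```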
```shell
# For Linux
chmod +x ./kindcluster/install_kind.sh
./kindcluster/install_kind.sh
```
```shell
# For Intel Mac
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.28.0/kind-darwin-amd64
# For M1 / ARM Mac
[ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.28.0/kind-darwin-arm64
chmod +x ./kind
```

```shell
brew install kubectl # For Mac
```

For setting up kubectl on Linux, follow this tutorial.
```shell
kubectl version
./kind --version # For Mac
kind --version   # For Linux
```

```shell
./kind create cluster --name=mycluster --config=./kindcluster/config.yaml # For Mac
```
```shell
kind create cluster --name=mycluster --config=./kindcluster/config.yaml # For Linux
kubectl get nodes
```

On Apple Silicon, enable x86_64 emulation so amd64 images can run:

```shell
softwareupdate --install-rosetta --agree-to-license # For M1 / ARM Mac
docker run --privileged --rm tonistiigi/binfmt --install all # For M1 / ARM Mac
docker run --rm --platform linux/amd64 alpine uname -m # For M1 / ARM Mac
```

Deploy Nopayloaddb:

```shell
cd src/Nopayloaddb
kubectl create namespace npps
kubectl create -f secret.yaml -f django-service.yaml -f django-deployment.yaml -f postgres-service.yaml -f postgres-deployment.yaml
kubectl port-forward deployment/npdb 8000:8000 -n npps
```

Then open http://localhost:8000/api/cdb_rest/payloadiovs/?gtName=sPHENIX_ExampleGT_24&majorIOV=0&minorIOV=999999

Set up the kafka namespace and deploy Fluent Bit:

```shell
kubectl delete ns grafana-loki
kubectl create ns kafka
kubectl config set-context --current --namespace=kafka
cd fluentbit
kubectl create -f cluster-role.yaml -f clusterrole-binding.yaml -f service-account.yaml
kubectl create -f fluent-bit-configmap.yaml
kubectl create -f fluent-bit-daemonset.yaml
cd ..
```

Deploy Kafka and the Kafka UI:

```shell
cd kafka
kubectl create -n kafka -f kafka-pvc.yaml -f kafka-statefulset.yaml -f kafka-service.yaml
helm install kafka-ui kafka-ui/kafka-ui --values kafka-ui.yaml
kubectl port-forward svc/kafka-ui 8081:80
```

Open your browser and navigate to http://localhost:8081
- Go to Topics
- Click on ops.kube-logs-fluentbit.stream.json.001
- View Messages (logs)
We can now access the logs from Nopayloaddb.
Next, configure Alloy, which acts as a Kafka consumer: it reads the logs from Kafka and forwards them to Loki.
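The repository's `configalloy.yaml` is not reproduced here; as a rough sketch in Alloy's configuration language, a Kafka-to-Loki pipeline like the one described generally pairs a `loki.source.kafka` component with a `loki.write` component (the broker and Loki service addresses below are assumptions about in-cluster DNS names, not values from the repo):

```alloy
loki.source.kafka "kafka_logs" {
  brokers    = ["kafka.kafka.svc.cluster.local:9092"]
  topics     = ["ops.kube-logs-fluentbit.stream.json.001"]
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push"
  }
}
```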
```shell
cd ../loki
kubectl create ns monitoring
kubectl config set-context --current --namespace=monitoring
kubectl create -f loki-pvc.yaml
helm repo add grafana https://grafana.github.io/helm-charts
helm upgrade --install --values all-values.yaml loki grafana/loki-stack -n monitoring
helm upgrade --install alloy grafana/alloy -n monitoring --values configalloy.yaml
kubectl get secret loki-grafana -o jsonpath="{.data.admin-user}" | base64 --decode
kubectl get secret loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode
kubectl port-forward svc/loki-grafana 3000:80
```

Open the Grafana web UI by visiting http://localhost:3000. Then go to Connections > Data sources, select Loki, and go to Explore to view the logs of the payload.
Loki sends logs to MinIO for storing log indexes and chunks.
```shell
cd ../minio
kubectl create ns minio
kubectl config set-context --current --namespace=minio
kubectl create -f minio-newdeploy.yaml -f minio-service.yaml -f minio-pvc.yaml -f minio-secret.yaml
kubectl port-forward svc/minio-service 9090:9090 -n minio
```

Open the MinIO web UI by visiting http://localhost:9090 and create a bucket named logs. You will then see the logs stored as indexes and chunks.
By default, the username and password of the MinIO UI are minioadmin. We will replace them using Kubernetes secrets.
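Kubernetes stores Secret values base64-encoded, which is why the `kubectl get secret` commands in this guide pipe through `base64 --decode`. A small stdlib sketch of that encoding (the example value mirrors the default credentials mentioned above):

```python
import base64

def encode_secret(value: str) -> str:
    """Encode a value the way it appears in a Secret's data: field."""
    return base64.b64encode(value.encode()).decode()

def decode_secret(data: str) -> str:
    """Reverse of the above; what `base64 --decode` does in the shell."""
    return base64.b64decode(data).decode()
```

This is also how the values in `minio-secret.yaml` must be written: encoded, not plaintext.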
Drain3 queries the logs from Loki and sends the npps logs to Redis. Deeplog then reads these logs and trains the model. Python files are also included so anyone can adjust the parameters of Drain3 and Deeplog before using them. Drain3 is configured as a CronJob in Kubernetes, and it can run every few minutes.
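Drain3 clusters raw log lines into templates. As a toy illustration of that idea (this is not the drain3 library's API, just a minimal stand-in): masking variable tokens such as numbers makes similar lines collapse into one template, which can then be counted:

```python
import re
from collections import Counter

def to_template(line: str) -> str:
    # Mask digit runs so variable parts collapse to <*>.
    return re.sub(r"\d+", "<*>", line)

def mine_templates(lines):
    """Count how many raw lines fall under each mined template."""
    return Counter(to_template(l) for l in lines)
```

Drain3 proper builds a parse tree and handles far more variable types, but the output shape is the same: a template ID per line, which is what Deeplog consumes from Redis.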
```shell
kubectl create ns drain3
kubectl config set-context --current --namespace=drain3
cd src/Drain3
kubectl create -f drain3.yaml -f redis.yaml
```

Before configuring Deeplog, recreate the Fluent Bit ConfigMap and DaemonSet so that the anomalies detected by Deeplog are visible on the Grafana dashboard.
```shell
cd src/deeplog/src
kubectl create ns deeplog
kubectl create -f deeplog.yaml -n deeplog
```

To view the anomalies, repeat the steps for opening the Grafana dashboard.
```shell
kubectl get secret loki-grafana -o jsonpath="{.data.admin-user}" | base64 --decode
kubectl get secret loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode
kubectl port-forward svc/loki-grafana 3000:80 -n monitoring
```

Open the Grafana web UI by visiting http://localhost:3000
Then go to Connections > Data sources, select Loki, go to Explore, and select the namespace `deeplog` to view the anomalies.
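Deeplog's core idea is to learn which log keys (Drain3 template IDs) normally follow a given window of keys, and to flag a key as anomalous when it falls outside the model's top predictions. The real model is an LSTM; this frequency-count toy version (an illustration, not the project's implementation) captures the same check:

```python
from collections import Counter, defaultdict

def train(sequences, window=2):
    """Count which key follows each window of keys in normal logs."""
    model = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - window):
            ctx = tuple(seq[i:i + window])
            model[ctx][seq[i + window]] += 1
    return model

def is_anomalous(model, ctx, key, top_g=2):
    """Flag key if it is not among the top-g keys seen after ctx."""
    candidates = [k for k, _ in model[tuple(ctx)].most_common(top_g)]
    return key not in candidates
```

In production the window of recent keys comes from Redis, and flagged keys are written back as anomaly records for Grafana to display.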