This repository has been archived by the owner on May 17, 2019. It is now read-only.

Commit

Merge pull request #36 from carlossg/flagger
More Flagger demo work
carlossg-bot authored Jan 31, 2019
2 parents c259c2a + a872c13 commit 022ebaf
Showing 6 changed files with 110 additions and 13 deletions.
7 changes: 7 additions & 0 deletions README.md
@@ -6,6 +6,13 @@ For those that have dreamt to hunt crocs
 
 Basic go webserver to demonstrate example CI/CD pipeline using Kubernetes
 
+## Injecting Delays and Errors
+
+Making requests to these URLs will cause the app to delay the response or respond with an error, which can be useful to simulate real-life errors.
+
+    /delay?wait=5
+    /status?code=500
+
 # Deploy using JenkinsX (Kubernetes, Helm, Monocular, ChartMuseum)
 
 Just follow the [JenkinsX](http://jenkins-x.io) installation with `--prow=true`
53 changes: 53 additions & 0 deletions charts/croc-hunter-jenkinsx/templates/canary.yaml
@@ -0,0 +1,53 @@
{{- if eq .Release.Namespace "jx-production" }}
apiVersion: flagger.app/v1alpha2
kind: Canary
metadata:
  # canary name must match deployment name
  name: jx-production-croc-hunter-jenkinsx
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: jx-production-croc-hunter-jenkinsx
  # HPA reference (optional)
  # autoscalerRef:
  #   apiVersion: autoscaling/v2beta1
  #   kind: HorizontalPodAutoscaler
  #   name: jx-production-croc-hunter-jenkinsx
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  service:
    # container port
    port: 8080
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    # Istio virtual service host names (optional)
    hosts:
    - croc-hunter.istio.us.g.csanchez.org
    - croc-hunter.istio.eu.g.csanchez.org
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 15s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 10
    metrics:
    - name: istio_requests_total
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s
{{- end }}
20 changes: 20 additions & 0 deletions croc-hunter.go
@@ -8,6 +8,8 @@ import (
 	"log"
 	"net/http"
 	"os"
+	"strconv"
+	"time"
 )
 
 var release = os.Getenv("WORKFLOW_RELEASE")
@@ -104,6 +106,24 @@ func handler(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 
+	if r.URL.Path == "/delay" {
+		delay, _ := strconv.Atoi(r.URL.Query().Get("wait"))
+		if delay <= 0 {
+			delay = 10
+		}
+		time.Sleep(time.Duration(delay) * time.Second)
+		w.WriteHeader(http.StatusOK)
+		fmt.Fprintf(w, "{delay: %d}", delay)
+		return
+	}
+
+	if r.URL.Path == "/status" {
+		code, _ := strconv.Atoi(r.URL.Query().Get("code"))
+		w.WriteHeader(code)
+		fmt.Fprintf(w, "{code: %d}", code)
+		return
+	}
+
 	hostname, err := os.Hostname()
 	if err != nil {
 		log.Fatalf("could not get hostname: %s", err)
23 changes: 19 additions & 4 deletions flagger/README.md
@@ -2,12 +2,27 @@
 
 Install [Flagger](https://docs.flagger.app/install/install-flagger)
 
-Enable Istio in the jx-staging and jx-production namespaces
+Enable Istio in the `jx-staging` and `jx-production` namespaces for metrics gathering
 
-    kubectl patch ns jx-staging --type=json -p='[{"op": "add", "path": "/metadata/labels/istio-injection", "value": "enabled"}]'
-    kubectl patch ns jx-production --type=json -p='[{"op": "add", "path": "/metadata/labels/istio-injection", "value": "enabled"}]'
+    kubectl label namespace jx-staging istio-injection=enabled
+    kubectl label namespace jx-production istio-injection=enabled
 
 
-Create the canary object that will add our deployment to Flagger
+Create the canary object that will add our deployment to Flagger. This is already created by the Helm chart when promoting to the `jx-production` namespace.
 
     kubectl create -f croc-hunter-canary.yaml
 
+Optional: Create a `ServiceEntry` to allow traffic to the Google metadata API to display the region
+
+    kubectl create -f ../istio/google-api.yaml
+
+# Grafana dashboard
+
+    kubectl --namespace istio-system port-forward deploy/flagger-grafana 3000
+
+Access it at [http://localhost:3000](http://localhost:3000) using admin/admin.
+Go to the `canary-analysis` dashboard and select:
+
+* namespace: `jx-production`
+* primary: `jx-production-croc-hunter-jenkinsx-primary`
+* canary: `jx-production-croc-hunter-jenkinsx`
15 changes: 8 additions & 7 deletions flagger/croc-hunter-canary.yaml
@@ -2,19 +2,19 @@ apiVersion: flagger.app/v1alpha2
 kind: Canary
 metadata:
   # canary name must match deployment name
-  name: jx-staging-croc-hunter-jenkinsx
-  namespace: jx-staging
+  name: jx-production-croc-hunter-jenkinsx
+  namespace: jx-production
 spec:
   # deployment reference
   targetRef:
     apiVersion: apps/v1
     kind: Deployment
-    name: jx-staging-croc-hunter-jenkinsx
+    name: jx-production-croc-hunter-jenkinsx
   # HPA reference (optional)
   # autoscalerRef:
   #   apiVersion: autoscaling/v2beta1
   #   kind: HorizontalPodAutoscaler
-  #   name: jx-staging-croc-hunter-jenkinsx
+  #   name: jx-production-croc-hunter-jenkinsx
   # the maximum time in seconds for the canary deployment
   # to make progress before it is rolled back (default 600s)
   progressDeadlineSeconds: 60
@@ -26,10 +26,11 @@ spec:
     - public-gateway.istio-system.svc.cluster.local
     # Istio virtual service host names (optional)
     hosts:
-    - croc-hunter.istio.g.csanchez.org
+    - croc-hunter.istio.us.g.csanchez.org
+    - croc-hunter.istio.eu.g.csanchez.org
   canaryAnalysis:
     # schedule interval (default 60s)
-    interval: 1m
+    interval: 15s
     # max number of failed metric checks before rollback
     threshold: 5
     # max traffic percentage routed to canary
@@ -43,7 +44,7 @@ spec:
       # minimum req success rate (non 5xx responses)
      # percentage (0-100)
       threshold: 99
-      interval: 1m
+      interval: 30s
     - name: istio_request_duration_seconds_bucket
       # maximum req duration P99
       # milliseconds
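The `istio_requests_total` check above gates promotion on the request success rate. A minimal sketch of that decision, assuming a plain non-5xx percentage (illustrative only, not Flagger's implementation; `successRate` is a hypothetical helper):

```go
package main

import "fmt"

// successRate returns the percentage of non-5xx responses, the value the
// canary analysis compares against the configured threshold (99 above).
func successRate(total, errors5xx int) float64 {
	if total == 0 {
		// No traffic yet: treat as healthy rather than divide by zero.
		return 100
	}
	return float64(total-errors5xx) * 100 / float64(total)
}

func main() {
	threshold := 99.0
	rate := successRate(1000, 25) // 25 of 1000 requests returned 5xx
	fmt.Printf("success rate %.1f%%, promote: %v\n", rate, rate >= threshold)
}
```

A failed check like this, repeated `threshold: 5` times in a row, is what causes Flagger to roll the canary back instead of advancing the traffic weight.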
5 changes: 3 additions & 2 deletions istio/README.md
@@ -4,6 +4,7 @@ Install [Istio](https://istio.io/docs/setup/kubernetes/quick-start/)
 
 ## Access from the Internet using Istio ingress gateway
 
+Istio will route the traffic entering through the ingress gateway.
 Find the [ingress gateway ip address](https://istio.io/docs/tasks/traffic-management/ingress/#determining-the-ingress-ip-and-ports) and configure a wildcard DNS for it.
 
 For example map `*.example.com` to
@@ -24,8 +25,8 @@ If you need to access the service through Istio from inside the cluster (not nee
 
 Enable Istio in the jx-staging and jx-production namespaces
 
-    kubectl patch ns jx-carlossg-croc-hunter-jenkinsx-serverless-pr-35 --type=json -p='[{"op": "add", "path": "/metadata/labels/istio-injection", "value": "enabled"}]'
-    kubectl patch ns jx-production --type=json -p='[{"op": "add", "path": "/metadata/labels/istio-injection", "value": "enabled"}]'
+    kubectl label namespace jx-staging istio-injection=enabled
+    kubectl label namespace jx-production istio-injection=enabled
