
Migrate Istio VirtualService/DestinationRule to apiVersion v1beta1 (currently v1alpha3) #1602

Merged
1 commit merged into fluxcd:main, Mar 26, 2024

Conversation

@benoitg31 benoitg31 (Contributor) commented Feb 28, 2024

This is an apiVersion upgrade for the Istio DestinationRule and VirtualService resources.
It may be needed in corner cases where only v1beta1 is available (for instance when using vCluster).
v1beta1 has been available since Istio 1.5.
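For illustration, a minimal Go sketch of reading such an object through the v1beta1 API with istio.io/client-go; the namespace/name mirror the podinfo test later in this thread, and the kubeconfig handling is an assumption, not code from this PR:

package main

import (
	"context"
	"fmt"
	"log"

	istioclient "istio.io/client-go/pkg/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	ic, err := istioclient.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The v1beta1 client reads the same stored object even if it was
	// created as v1alpha3: the Istio CRDs serve both versions.
	vs, err := ic.NetworkingV1beta1().VirtualServices("test").Get(
		context.Background(), "podinfo", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(vs.Name, vs.Spec.Hosts)
}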

@benoitg31 benoitg31 force-pushed the main branch 2 times, most recently from 8db6fae to 589bbda, February 28, 2024 13:39
@codecov-commenter codecov-commenter commented Feb 28, 2024

Codecov Report

Attention: Patch coverage is 94.02985%, with 4 lines in your changes missing coverage. Please review.

Project coverage is 56.64%. Comparing base (285ee6e) to head (217db66).
Report is 20 commits behind head on main.

Files Patch % Lines
pkg/router/istio.go 93.84% 4 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1602   +/-   ##
=======================================
  Coverage   56.64%   56.64%           
=======================================
  Files          85       85           
  Lines        8543     8543           
=======================================
  Hits         4839     4839           
  Misses       3033     3033           
  Partials      671      671           


…inationRule instead of v1alpha1

Signed-off-by: Benoit Gaillard <benoit.gaillard@continental-corporation.com>
@benoitg31 benoitg31 marked this pull request as ready for review February 28, 2024 14:34
@benoitg31 benoitg31 marked this pull request as draft February 29, 2024 10:46
@@ -73,15 +73,15 @@ func (ir *IstioRouter) Reconcile(canary *flaggerv1.Canary) error {
 }
 
 func (ir *IstioRouter) reconcileDestinationRule(canary *flaggerv1.Canary, name string) error {
-	newSpec := istiov1alpha3.DestinationRuleSpec{
+	newSpec := istiov1beta1.DestinationRuleSpec{
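(For context on the change above: the DestinationRule spec shape is identical in v1alpha3 and v1beta1, so only the package alias moves. A runnable sketch using the upstream istio.io/api types rather than Flagger's vendored istiov1beta1 package; the host value is illustrative:)

package main

import (
	"fmt"

	networkingv1beta1 "istio.io/api/networking/v1beta1"
)

func main() {
	// Same fields as the v1alpha3 type; "podinfo-canary" matches the
	// test workload used later in this thread.
	spec := &networkingv1beta1.DestinationRule{
		Host: "podinfo-canary",
	}
	fmt.Println(spec.Host)
}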
@stefanprodan (Member) commented:

How would this impact existing canaries and Istio objects? Could you please test by upgrading an existing cluster to this new version of Flagger?

@benoitg31 benoitg31 (Contributor, Author) commented Mar 18, 2024

Hello @stefanprodan,
I could finally perform the upgrade test:

arch/OS: arm64, macOS 14.2.1
docker: 24.0.7
kind: v0.22.0 go1.21.7 darwin/arm64
go: go1.20.14 darwin/arm64
istioctl: client version 1.20.0

  • create a 1.25 cluster with kind
    cluster-1.25.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: 1-25
nodes:
- role: control-plane
  image: kindest/node:v1.25.16@sha256:e8b50f8e06b44bb65a93678a65a26248fae585b3d3c2a669e5ca6c90c69dc519
- role: worker
  image: kindest/node:v1.25.16@sha256:e8b50f8e06b44bb65a93678a65a26248fae585b3d3c2a669e5ca6c90c69dc519
❯ kind create cluster --config cluster-1.25.yaml
Creating cluster "1-25" ...
 ✓ Ensuring node image (kindest/node:v1.25.16) 🖼 
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
  • install istio
> kind load docker-image docker.io/istio/pilot:1.20.0 --name 1-25
> kind load docker-image docker.io/istio/proxyv2:1.20.0 --name 1-25
> k create ns istio-system
> istioctl install --set profile=default
This will install the Istio 1.20.0 "default" profile (with components: Istio core, Istiod, and Ingress gateways) into the cluster. Proceed? (y/N) y
✔ Istio core installed   
  • install flagger (latest released version, which still uses the istio v1alpha3 API)
> kind load docker-image ghcr.io/fluxcd/flagger:1.36.1 --name 1-25
Image: "ghcr.io/fluxcd/flagger:1.36.1" with ID "sha256:4defb598d160013b9bc0e66ec20a788a0e22e60738fed8a5961496188fe20cc8" not yet present on node "1-25-control-plane", loading...
Image: "ghcr.io/fluxcd/flagger:1.36.1" with ID "sha256:4defb598d160013b9bc0e66ec20a788a0e22e60738fed8a5961496188fe20cc8" not yet present on node "1-25-worker", loading...
> kubectl apply -k github.com/fluxcd/flagger//kustomize/istio
customresourcedefinition.apiextensions.k8s.io/alertproviders.flagger.app created
customresourcedefinition.apiextensions.k8s.io/canaries.flagger.app created
customresourcedefinition.apiextensions.k8s.io/metrictemplates.flagger.app created
serviceaccount/flagger created
clusterrole.rbac.authorization.k8s.io/flagger created
clusterrolebinding.rbac.authorization.k8s.io/flagger created
deployment.apps/flagger created
  • setup and install test workload (podinfo)
> kubectl create ns test
> kubectl label namespace test istio-injection=enabled
> kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
  • Flagger pod info app:
    canary.yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  service:
    port: 9898
    portDiscovery: true
    apex:
      annotations:
        test: "annotations-test"
      labels:
        test: "labels-test"
    headers:
      request:
        add:
          x-envoy-upstream-rq-timeout-ms: "15000"
          x-envoy-max-retries: "10"
          x-envoy-retry-on: "gateway-error,connect-failure,refused-stream"
  analysis:
    interval: 15s
    threshold: 15
    maxWeight: 30
    stepWeight: 10
    metrics:
    - name: latency
      templateRef:
        name: latency
        namespace: istio-system
      thresholdRange:
        max: 500
      interval: 1m
      templateVariables:
        reporter: destination
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          type: cmd
          cmd: "hey -z 10m -q 10 -c 2 http://podinfo.test:9898/"
          logCmdOutput: "true"
> kubectl apply -f canary.yaml
> stern flagger -n istio-system
...
flagger-55995f4fd5-pvpmg flagger {"level":"info","ts":"2024-03-15T14:25:36.982Z","caller":"router/istio.go:105","msg":"DestinationRule podinfo-canary.test created","canary":"podinfo.test"}
flagger-55995f4fd5-pvpmg flagger {"level":"info","ts":"2024-03-15T14:25:36.989Z","caller":"router/istio.go:105","msg":"DestinationRule podinfo-primary.test created","canary":"podinfo.test"}
flagger-55995f4fd5-pvpmg flagger {"level":"info","ts":"2024-03-15T14:25:36.997Z","caller":"router/istio.go:290","msg":"VirtualService podinfo.test created","canary":"podinfo.test"}
flagger-55995f4fd5-pvpmg flagger {"level":"info","ts":"2024-03-15T14:25:37.005Z","caller":"controller/events.go:33","msg":"Initialization done! podinfo.test","canary":"podinfo.test"}
> k get vs podinfo -o yaml | head
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    helm.toolkit.fluxcd.io/driftDetection: disabled
    kustomize.toolkit.fluxcd.io/reconcile: disabled
    test: annotations-test
  creationTimestamp: "2024-03-15T14:25:36Z"
  generation: 1
[...]
  spec:
    gateways:
    - mesh
    hosts:
    - podinfo
    http:
    - headers:
        request:
          add:
            x-envoy-max-retries: "10"
            x-envoy-retry-on: gateway-error,connect-failure,refused-stream
            x-envoy-upstream-rq-timeout-ms: "15000"
      route:
      - destination:
          host: podinfo-primary
        weight: 100
      - destination:
          host: podinfo-canary
        weight: 0
  • build and upgrade flagger to local version (the one that supports v1beta1)
> docker build -t test/flagger:latest .
> kind load docker-image test/flagger:latest --name 1-25
> kubectl -n istio-system set image deployment/flagger flagger=test/flagger:latest
> stern flagger -n istio-system
...
flagger-6d9694b694-kgz2v flagger {"level":"info","ts":"2024-03-15T15:03:01.649Z","caller":"controller/controller.go:186","msg":"Starting operator"}
flagger-6d9694b694-kgz2v flagger {"level":"info","ts":"2024-03-15T15:03:01.649Z","caller":"controller/controller.go:195","msg":"Started operator workers"}
flagger-6d9694b694-kgz2v flagger {"level":"info","ts":"2024-03-15T15:03:01.654Z","caller":"controller/controller.go:307","msg":"Synced test/podinfo"}
> k get vs podinfo -o yaml | head
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  annotations:
    helm.toolkit.fluxcd.io/driftDetection: disabled
    kustomize.toolkit.fluxcd.io/reconcile: disabled
    test: annotations-test
  creationTimestamp: "2024-03-15T14:25:36Z"
  generation: 1
[...]
  spec:
    gateways:
    - mesh
    hosts:
    - podinfo
    http:
    - headers:
        request:
          add:
            x-envoy-max-retries: "10"
            x-envoy-retry-on: gateway-error,connect-failure,refused-stream
            x-envoy-upstream-rq-timeout-ms: "15000"
      route:
      - destination:
          host: podinfo-primary
        weight: 100
      - destination:
          host: podinfo-canary
        weight: 0
> k get dr
NAME              HOST              AGE
podinfo-canary    podinfo-canary    2d18h
podinfo-primary   podinfo-primary   2d18h

Reconcile OK, with the VirtualService not even updated (generation is still 1) :)
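(Why nothing changed: the two schemas are identical and Kubernetes serves both versions of the same stored object, so the reconcile loop sees no spec drift. A minimal sketch of that skip pattern, assuming a go-cmp comparison; specUnchanged and its callers are hypothetical, not Flagger's exact code:)

package router

import "github.com/google/go-cmp/cmp"

// specUnchanged illustrates the short-circuit: when the desired spec equals
// what is already stored, no Update call is issued, so the object's
// generation stays at 1.
func specUnchanged(desired, existing interface{}) bool {
	return cmp.Diff(desired, existing) == ""
}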

@benoitg31 benoitg31 marked this pull request as ready for review March 18, 2024 13:00
@stefanprodan stefanprodan (Member) left a comment

LGTM

Thanks @benoitg31 🏅

@stefanprodan stefanprodan merged commit 0a616df into fluxcd:main Mar 26, 2024
17 checks passed