Installing {prod-short} on the virtual {kubernetes} cluster

With vCluster, you can run a virtual {kubernetes} cluster that runs entirely in pods within an actual {kubernetes} cluster. Unlike a typical {kubernetes} cluster, a virtual cluster does not have its own node pools or networking; its workloads are scheduled inside the underlying host namespace.

Tip
You can also use vCluster to install {prod-short} in cases where {prod-short} typically cannot be installed: for example, when the {kubernetes} cluster runs a version that {prod-short} does not support, or when the cloud provider does not support external OIDC. Running {prod-short} within vCluster helps circumvent these issues.

This section describes how to install {prod-short} on a virtual {kubernetes} cluster.

Prerequisites
Procedure
  1. Define the cluster domain name:

    DOMAIN_NAME=<KUBERNETES_CLUSTER_DOMAIN_NAME>
  2. Install the Ingress Controller. See your {kubernetes} provider documentation for installation instructions.

    Tip

    Use the following command to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
        --namespace ingress-nginx \
        --create-namespace \
        --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
        --set controller.service.externalTrafficPolicy=Cluster
  3. Install cert-manager:

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    
    helm install cert-manager jetstack/cert-manager \
      --wait \
      --create-namespace \
      --namespace cert-manager \
      --set installCRDs=true
  4. Define the Keycloak host:

    KEYCLOAK_HOST=keycloak.${DOMAIN_NAME}
    Important

    If you use a registrar such as GoDaddy, you will need to add the following DNS record in your registrar and point it to the IP address of the ingress controller:

    • type: A

    • name: keycloak

    Tip

    Use the following command to figure out the external IP address of the NGINX Ingress Controller:

    {orch-cli} get services ingress-nginx-controller \
      --namespace ingress-nginx \
      --output jsonpath="{.status.loadBalancer.ingress[0].ip}"
    Tip

    Use the following command to wait until the Keycloak host is resolvable:

    until ping -c1 ${KEYCLOAK_HOST} >/dev/null 2>&1; do :; done
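    Tip

    The loop above runs indefinitely if the DNS record never propagates. A bounded variant (a sketch, assuming GNU coreutils timeout and getent are available) gives up after 300 seconds instead:

```shell
# Bounded wait for the Keycloak host to resolve.
# KEYCLOAK_HOST defaults to localhost here only so this sketch runs standalone;
# in the procedure it is keycloak.${DOMAIN_NAME}.
KEYCLOAK_HOST=${KEYCLOAK_HOST:-localhost}
if timeout 300 bash -c "until getent hosts ${KEYCLOAK_HOST} >/dev/null; do sleep 5; done"; then
    RESOLVED=yes
else
    RESOLVED=no
fi
echo "${KEYCLOAK_HOST} resolved: ${RESOLVED}"
```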
  5. Install Keycloak with a self-signed certificate:

    {orch-cli} apply -f - <<EOF
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: keycloak
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: keycloak-selfsigned
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: keycloak-selfsigned
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      isCA: true
      commonName: keycloak-selfsigned-ca
      privateKey:
        algorithm: ECDSA
        size: 256
      issuerRef:
        name: keycloak-selfsigned
        kind: Issuer
        group: cert-manager.io
      secretName: ca.crt
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      ca:
        secretName: ca.crt
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      isCA: false
      commonName: keycloak
      dnsNames:
        - ${KEYCLOAK_HOST}
      privateKey:
        algorithm: RSA
        encoding: PKCS1
        size: 4096
      issuerRef:
        kind: Issuer
        name: keycloak
        group: cert-manager.io
      secretName: keycloak.tls
      subject:
        organizations:
          - Local Eclipse Che
      usages:
        - server auth
        - digital signature
        - key encipherment
        - key agreement
        - data encipherment
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      ports:
      - name: http
        port: 8080
        targetPort: 8080
      selector:
        app: keycloak
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: keycloak
      namespace: keycloak
      labels:
        app: keycloak
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: keycloak
      template:
        metadata:
          labels:
            app: keycloak
        spec:
          containers:
          - name: keycloak
            image: quay.io/keycloak/keycloak:24.0.2
            args: ["start-dev"]
            env:
            - name: KEYCLOAK_ADMIN
              value: "admin"
            - name: KEYCLOAK_ADMIN_PASSWORD
              value: "admin"
            - name: KC_PROXY
              value: "edge"
            ports:
            - name: http
              containerPort: 8080
            readinessProbe:
              httpGet:
                path: /realms/master
                port: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: keycloak
      namespace: keycloak
      annotations:
        nginx.ingress.kubernetes.io/proxy-connect-timeout: '3600'
        nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
        nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - ${KEYCLOAK_HOST}
          secretName: keycloak.tls
      rules:
      - host: ${KEYCLOAK_HOST}
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
    EOF
  6. Wait until the Keycloak pod is ready:

    {orch-cli} wait --for=condition=ready pod -l app=keycloak -n keycloak --timeout=120s
  7. Configure Keycloak to create the che realm:

    {orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create realms \
            -s realm='che' \
            -s displayName='Eclipse Che' \
            -s enabled=true \
            -s registrationAllowed=false \
            -s resetPasswordAllowed=true"
  8. Configure Keycloak to create the che-public client:

    {orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create clients \
            -r 'che' \
            -s name=che-public \
            -s clientId=che-public \
            -s id=che-public \
            -s redirectUris='[\"*\"]' \
            -s webOrigins='[\"*\"]' \
            -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
            -s standardFlowEnabled=true \
            -s publicClient=true \
            -s frontchannelLogout=true \
            -s directAccessGrantsEnabled=true && \
        /opt/keycloak/bin/kcadm.sh create clients/che-public/protocol-mappers/models \
            -r 'che' \
            -s name=groups \
            -s protocol=openid-connect \
            -s protocolMapper=oidc-group-membership-mapper \
            -s consentRequired=false \
            -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}'"
  9. Configure Keycloak to create the che user and the vcluster group:

    {orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create users \
            -r 'che' \
            -s enabled=true \
            -s username=che \
            -s email=\"che@che\" \
            -s emailVerified=true \
            -s firstName=\"Eclipse\" \
            -s lastName=\"Che\" && \
        /opt/keycloak/bin/kcadm.sh set-password \
            -r 'che' \
            --username che \
            --new-password che && \
        /opt/keycloak/bin/kcadm.sh create groups \
            -r 'che' \
            -s name=vcluster"
  10. Configure Keycloak to add the che user to the vcluster group:

    {orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        USER_ID=\$(/opt/keycloak/bin/kcadm.sh get users \
            -r 'che' \
            -q 'username=che' \
                    |  sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
        GROUP_ID=\$(/opt/keycloak/bin/kcadm.sh get groups \
            -r 'che' \
            -q 'name=vcluster' \
                    |  sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
        /opt/keycloak/bin/kcadm.sh update users/\$USER_ID/groups/\$GROUP_ID \
            -r 'che'"
  11. Configure Keycloak to create the che-private client:

    {orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh create clients \
            -r 'che' \
            -s name=che-private \
            -s clientId=che-private \
            -s id=che-private \
            -s redirectUris='[\"*\"]' \
            -s webOrigins='[\"*\"]' \
            -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
            -s standardFlowEnabled=true \
            -s publicClient=false \
            -s frontchannelLogout=true \
            -s serviceAccountsEnabled=true \
            -s directAccessGrantsEnabled=true && \
        /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
            -r 'che' \
            -s name=groups \
            -s protocol=openid-connect \
            -s protocolMapper=oidc-group-membership-mapper \
            -s consentRequired=false \
            -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}' && \
        /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
            -r 'che' \
            -s name=audience \
            -s protocol=openid-connect \
            -s protocolMapper=oidc-audience-mapper \
            -s config='{\"included.client.audience\" : \"che-public\", \"access.token.claim\" : \"true\", \"id.token.claim\" : \"true\"}'"
  12. Print and save the che-private client secret:

    {orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
        "/opt/keycloak/bin/kcadm.sh config credentials \
            --server http://localhost:8080 \
            --realm master \
            --user admin  \
            --password admin && \
        /opt/keycloak/bin/kcadm.sh get clients/che-private/client-secret \
            -r che"
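    Tip

    The command prints JSON such as { "type" : "secret", "value" : "..." }. To capture the value in a shell variable for later use, the output can be parsed with sed. A sketch (the JSON is inlined here so the example runs standalone, and the secret value is hypothetical):

```shell
# Sample of the JSON printed by kcadm.sh (hypothetical secret value);
# in the procedure this would come from the command above.
CLIENT_SECRET_JSON='{ "type" : "secret", "value" : "s3cr3t-value" }'

# Extract the "value" field into a variable.
CHE_PRIVATE_CLIENT_SECRET=$(printf '%s' "${CLIENT_SECRET_JSON}" \
    | sed -n 's|.*"value" : "\([^"]*\)".*|\1|p')
echo "${CHE_PRIVATE_CLIENT_SECRET}"
```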
  13. Prepare the values for the vCluster Helm chart:

    cat > /tmp/vcluster-values.yaml << EOF
    api:
      image: registry.k8s.io/kube-apiserver:v1.27.1
      extraArgs:
        - --oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che
        - --oidc-client-id=che-public
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups
        - --oidc-ca-file=/tmp/certificates/keycloak-ca.crt
    
    init:
      manifestsTemplate: |-
        ---
        kind: ClusterRoleBinding
        apiVersion: rbac.authorization.k8s.io/v1
        metadata:
          name: oidc-cluster-admin
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
        - kind: Group
          name: vcluster
    service:
      type: LoadBalancer
    EOF
  14. Install vCluster:

    helm repo add loft-sh https://charts.loft.sh
    helm repo update
    
    helm install vcluster loft-sh/vcluster-k8s \
      --create-namespace \
      --namespace vcluster \
      --values /tmp/vcluster-values.yaml
  15. Mount the Keycloak CA certificate into the vcluster pod:

    {orch-cli} get secret ca.crt \
        --output "jsonpath={.data['ca\.crt']}" \
        --namespace keycloak \
          | base64 -d > /tmp/keycloak-ca.crt
    
    {orch-cli} create configmap keycloak-cert \
        --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
        --namespace vcluster
    
    {orch-cli} patch deployment vcluster -n vcluster --type json -p='[
      {
        "op": "add",
        "path": "/spec/template/spec/volumes/-",
        "value": {
          "name": "keycloak-cert",
          "configMap": {
            "name": "keycloak-cert"
          }
        }
      },
      {
        "op": "add",
        "path": "/spec/template/spec/containers/0/volumeMounts/-",
        "value": {
          "name": "keycloak-cert",
          "mountPath": "/tmp/certificates"
        }
      }
    ]'
  16. Wait until the vc-vcluster secret is created:

    timeout 120 bash -c 'while :; do {orch-cli} get secret vc-vcluster -n vcluster && break || sleep 5; done'
  17. Verify the vCluster cluster status:

    vcluster list
  18. Update the kubeconfig file. This configuration relies on the kubelogin (kubectl oidc-login) plugin, which must be installed on your machine:

    {orch-cli} config set-credentials vcluster \
        --exec-api-version=client.authentication.k8s.io/v1beta1 \
        --exec-command=kubectl \
        --exec-arg=\
    oidc-login,\
    get-token,\
    --oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che,\
    --certificate-authority=/tmp/keycloak-ca.crt,\
    --oidc-client-id=che-public,\
    --oidc-extra-scope="email offline_access profile openid"
    
    {orch-cli} get secret vc-vcluster -n vcluster -o jsonpath="{.data.certificate-authority}" | base64 -d > /tmp/vcluster-ca.crt
    {orch-cli} config set-cluster vcluster \
        --server=https://$(kubectl get svc vcluster-lb \
                        --namespace vcluster \
                        --output jsonpath="{.status.loadBalancer.ingress[0].ip}"):443 \
        --certificate-authority=/tmp/vcluster-ca.crt
    
    {orch-cli} config set-context vcluster \
        --cluster=vcluster \
        --user=vcluster
  19. Use the vcluster kubeconfig context:

    {orch-cli} config use-context vcluster
  20. View the pods in the cluster. Running the following command redirects you to the authentication page:

    {orch-cli} get pods --all-namespaces
  21. Verification

    All pods in the Running state are displayed.

  22. Install the Ingress Controller on the virtual {kubernetes} cluster.

    Tip

    Use the following command to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
        --namespace ingress-nginx \
        --create-namespace \
        --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
        --set controller.service.externalTrafficPolicy=Cluster
    Important

    If you use a registrar such as GoDaddy, you will need to add the following two DNS records in your registrar and point them to the IP address of the ingress controller:

    • type: A

    • name: @ and *

    Tip

    Use the following command to figure out the external IP address of the NGINX Ingress Controller:

    {orch-cli} get services ingress-nginx-controller \
      --namespace ingress-nginx \
      --output jsonpath="{.status.loadBalancer.ingress[0].ip}"
    Tip

    Use the following command to wait until the {kubernetes} host is resolvable:

    until ping -c1 ${DOMAIN_NAME} >/dev/null 2>&1; do :; done
  23. Create a CheCluster patch YAML file, replacing CHE_PRIVATE_CLIENT_SECRET with the client secret saved above:

    cat > /tmp/che-patch.yaml << EOF
    kind: CheCluster
    apiVersion: org.eclipse.che/v2
    spec:
      networking:
        ingressClassName: nginx
        auth:
          oAuthClientName: che-private
          oAuthSecret: CHE_PRIVATE_CLIENT_SECRET
          identityProviderURL: https://$KEYCLOAK_HOST/realms/che
          gateway:
            oAuthProxy:
              cookieExpireSeconds: 300
      components:
        cheServer:
          extraProperties:
            CHE_OIDC_USERNAME__CLAIM: email
    EOF
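    Tip

    Once the client secret is in a shell variable, the placeholder can be substituted with sed. A sketch (the variable value is hypothetical, and a demo file is used so the example runs standalone; against the real file, target /tmp/che-patch.yaml instead):

```shell
# Assumption: the che-private client secret saved in step 12 (hypothetical value).
CHE_PRIVATE_CLIENT_SECRET=s3cr3t-value

# Demo copy of the relevant line so this sketch runs standalone.
printf 'oAuthSecret: CHE_PRIVATE_CLIENT_SECRET\n' > /tmp/che-patch-demo.yaml

# Replace the placeholder in place (GNU sed syntax).
sed -i "s|CHE_PRIVATE_CLIENT_SECRET|${CHE_PRIVATE_CLIENT_SECRET}|" /tmp/che-patch-demo.yaml
cat /tmp/che-patch-demo.yaml
```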
  24. Create the {prod-namespace} namespace:

    {orch-cli} create namespace {prod-namespace}
  25. Copy the Keycloak CA certificate into the {prod-namespace} namespace:

    {orch-cli} create configmap keycloak-certs \
            --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
            --namespace {prod-namespace}
    
    {orch-cli} label configmap keycloak-certs \
            app.kubernetes.io/part-of=che.eclipse.org \
            app.kubernetes.io/component=ca-bundle \
            --namespace {prod-namespace}
  26. Deploy {prod-short}:

    {prod-cli} server:deploy \
            --platform k8s \
            --domain $DOMAIN_NAME \
            --che-operator-cr-patch-yaml /tmp/che-patch.yaml
Verification steps
  1. Verify the {prod-short} instance status:

    $ {prod-cli} server:status
  2. Navigate to the {prod-short} cluster instance:

    $ {prod-cli} dashboard:open
  3. Log in to the {prod-short} instance with Username: che and Password: che.