With vCluster, you can run virtual {kubernetes} clusters that run entirely on pods within an actual {kubernetes} cluster. Unlike a typical {kubernetes} cluster, a virtual cluster does not have its own node pools or networking; its workloads are scheduled inside a namespace of the underlying host cluster.
Tip: You can also use vCluster to install {prod-short} in cases where {prod-short} typically cannot be installed: for example, when {prod-short} must run on a {kubernetes} cluster version that {prod-short} does not support, or when the cloud provider does not support external OIDC. Running {prod-short} within a vCluster helps circumvent these limitations.
This section describes how to install {prod-short} on a virtual {kubernetes} cluster.
-
helm: The package manager for {kubernetes}. See: Installing Helm.
-
vcluster: The command-line tool for managing virtual {kubernetes} clusters. See: Installing vCluster CLI.
-
{orch-cli}: The {kubernetes} command-line tool. See: Installing {orch-cli}.
-
kubelogin: The kubectl plugin, also known as kubectl oidc-login. See: Installing kubelogin.
-
{prod-cli}: The command-line tool for {prod}. See: installing-the-chectl-management-tool.adoc.
-
An active {orch-cli} session with administrative permissions to the destination {orch-name} cluster.
-
Define the cluster domain name:
DOMAIN_NAME=<KUBERNETES_CLUSTER_DOMAIN_NAME>
-
Install an Ingress Controller. See your {kubernetes} provider documentation for installation instructions.
Tip: Use the following commands to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Cluster
-
Install cert-manager:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --wait \
  --create-namespace \
  --namespace cert-manager \
  --set installCRDs=true
-
Define the Keycloak host:
KEYCLOAK_HOST=keycloak.${DOMAIN_NAME}
Important: If you use a registrar such as GoDaddy, you need to add the following DNS record in your registrar and point it to the IP address of the ingress controller:
- type: A
- name: keycloak
Tip: Use the following command to find the external IP address of the NGINX Ingress Controller:
{orch-cli} get services ingress-nginx-controller \
  --namespace ingress-nginx \
  --output jsonpath="{.status.loadBalancer.ingress[0].ip}"
Tip: Use the following command to wait until the Keycloak host is known:
until ping -c1 ${KEYCLOAK_HOST} >/dev/null 2>&1; do :; done
-
Install Keycloak with a self-signed certificate:
{orch-cli} apply -f - <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
  name: keycloak
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: keycloak-selfsigned
  namespace: keycloak
  labels:
    app: keycloak
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: keycloak-selfsigned
  namespace: keycloak
  labels:
    app: keycloak
spec:
  isCA: true
  commonName: keycloak-selfsigned-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: keycloak-selfsigned
    kind: Issuer
    group: cert-manager.io
  secretName: ca.crt
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  ca:
    secretName: ca.crt
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  isCA: false
  commonName: keycloak
  dnsNames:
    - ${KEYCLOAK_HOST}
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 4096
  issuerRef:
    kind: Issuer
    name: keycloak
    group: cert-manager.io
  secretName: keycloak.tls
  subject:
    organizations:
      - Local Eclipse Che
  usages:
    - server auth
    - digital signature
    - key encipherment
    - key agreement
    - data encipherment
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:24.0.2
          args: ["start-dev"]
          env:
            - name: KEYCLOAK_ADMIN
              value: "admin"
            - name: KEYCLOAK_ADMIN_PASSWORD
              value: "admin"
            - name: KC_PROXY
              value: "edge"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '3600'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - ${KEYCLOAK_HOST}
      secretName: keycloak.tls
  rules:
    - host: ${KEYCLOAK_HOST}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
EOF
-
Wait until the Keycloak pod is ready:
{orch-cli} wait --for=condition=ready pod -l app=keycloak -n keycloak --timeout=120s
-
Configure Keycloak to create the che realm:
{orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create realms \
    -s realm='che' \
    -s displayName='Eclipse Che' \
    -s enabled=true \
    -s registrationAllowed=false \
    -s resetPasswordAllowed=true"
-
Configure Keycloak to create the che-public client:
{orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create clients \
    -r 'che' \
    -s name=che-public \
    -s clientId=che-public \
    -s id=che-public \
    -s redirectUris='[\"*\"]' \
    -s webOrigins='[\"*\"]' \
    -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
    -s standardFlowEnabled=true \
    -s publicClient=true \
    -s frontchannelLogout=true \
    -s directAccessGrantsEnabled=true && \
  /opt/keycloak/bin/kcadm.sh create clients/che-public/protocol-mappers/models \
    -r 'che' \
    -s name=groups \
    -s protocol=openid-connect \
    -s protocolMapper=oidc-group-membership-mapper \
    -s consentRequired=false \
    -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}'"
-
Configure Keycloak to create the che user and the vcluster group:
{orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create users \
    -r 'che' \
    -s enabled=true \
    -s username=che \
    -s email=\"che@che\" \
    -s emailVerified=true \
    -s firstName=\"Eclipse\" \
    -s lastName=\"Che\" && \
  /opt/keycloak/bin/kcadm.sh set-password \
    -r 'che' \
    --username che \
    --new-password che && \
  /opt/keycloak/bin/kcadm.sh create groups \
    -r 'che' \
    -s name=vcluster"
-
Configure Keycloak to add the che user to the vcluster group:
{orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  USER_ID=\$(/opt/keycloak/bin/kcadm.sh get users \
    -r 'che' \
    -q 'username=che' \
    | sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
  GROUP_ID=\$(/opt/keycloak/bin/kcadm.sh get groups \
    -r 'che' \
    -q 'name=vcluster' \
    | sed -n 's|.*\"id\" : \"\(.*\)\",|\1|p') && \
  /opt/keycloak/bin/kcadm.sh update users/\$USER_ID/groups/\$GROUP_ID \
    -r 'che'"
-
Configure Keycloak to create the che-private client:
{orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh create clients \
    -r 'che' \
    -s name=che-private \
    -s clientId=che-private \
    -s id=che-private \
    -s redirectUris='[\"*\"]' \
    -s webOrigins='[\"*\"]' \
    -s attributes='{\"post.logout.redirect.uris\": \"*\", \"oidc.ciba.grant.enabled\" : \"false\", \"oauth2.device.authorization.grant.enabled\" : \"false\", \"backchannel.logout.session.required\" : \"true\", \"backchannel.logout.revoke.offline.tokens\" : \"false\"}' \
    -s standardFlowEnabled=true \
    -s publicClient=false \
    -s frontchannelLogout=true \
    -s serviceAccountsEnabled=true \
    -s directAccessGrantsEnabled=true && \
  /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
    -r 'che' \
    -s name=groups \
    -s protocol=openid-connect \
    -s protocolMapper=oidc-group-membership-mapper \
    -s consentRequired=false \
    -s config='{\"full.path\" : \"false\", \"introspection.token.claim\" : \"true\", \"userinfo.token.claim\" : \"true\", \"id.token.claim\" : \"true\", \"lightweight.claim\" : \"false\", \"access.token.claim\" : \"true\", \"claim.name\" : \"groups\"}' && \
  /opt/keycloak/bin/kcadm.sh create clients/che-private/protocol-mappers/models \
    -r 'che' \
    -s name=audience \
    -s protocol=openid-connect \
    -s protocolMapper=oidc-audience-mapper \
    -s config='{\"included.client.audience\" : \"che-public\", \"access.token.claim\" : \"true\", \"id.token.claim\" : \"true\"}'"
-
Print and save the che-private client secret:
{orch-cli} exec deploy/keycloak -n keycloak -- bash -c \
  "/opt/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080 \
    --realm master \
    --user admin \
    --password admin && \
  /opt/keycloak/bin/kcadm.sh get clients/che-private/client-secret \
    -r che"
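The kcadm.sh command above prints the secret as a small JSON document. As a convenience, a minimal sketch for capturing just the secret value into a shell variable; the `extract_client_secret` helper and the variable name are illustrative, not part of the official procedure, and the sample JSON stands in for the real kcadm.sh output:

```shell
# Hypothetical helper: pull the "value" field out of kcadm.sh client-secret
# JSON output, which looks like { "type" : "secret", "value" : "..." }.
extract_client_secret() {
  sed -n 's/.*"value" *: *"\([^"]*\)".*/\1/p'
}

# Example usage with sample JSON in place of the real command output:
CHE_PRIVATE_CLIENT_SECRET=$(printf '%s' '{ "type" : "secret", "value" : "s3cr3t" }' | extract_client_secret)
echo "$CHE_PRIVATE_CLIENT_SECRET"
```

In the real flow, you would pipe the `{orch-cli} exec ... kcadm.sh get clients/che-private/client-secret` output into the helper instead of the sample string.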
-
Prepare values for the vCluster Helm chart:
cat > /tmp/vcluster-values.yaml << EOF
api:
  image: registry.k8s.io/kube-apiserver:v1.27.1
  extraArgs:
    - --oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che
    - --oidc-client-id=che-public
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
    - --oidc-ca-file=/tmp/certificates/keycloak-ca.crt
init:
  manifestsTemplate: |-
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: oidc-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: Group
        name: vcluster
service:
  type: LoadBalancer
EOF
-
Install vCluster:
helm repo add loft-sh https://charts.loft.sh
helm repo update
helm install vcluster loft-sh/vcluster-k8s \
  --create-namespace \
  --namespace vcluster \
  --values /tmp/vcluster-values.yaml
-
Mount the Keycloak CA certificate into the vcluster pod:
{orch-cli} get secret ca.crt \
  --output "jsonpath={.data['ca\.crt']}" \
  --namespace keycloak \
  | base64 -d > /tmp/keycloak-ca.crt
{orch-cli} create configmap keycloak-cert \
  --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
  --namespace vcluster
{orch-cli} patch deployment vcluster -n vcluster --type json -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": { "name": "keycloak-cert", "configMap": { "name": "keycloak-cert" } }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": { "name": "keycloak-cert", "mountPath": "/tmp/certificates" }
  }
]'
-
Wait until the vc-vcluster secret is created:
timeout 120 bash -c 'while :; do {orch-cli} get secret vc-vcluster -n vcluster && break || sleep 5; done'
-
Verify the vCluster status:
vcluster list
-
Update the kubeconfig file:
{orch-cli} config set-credentials vcluster \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=\
oidc-login,\
get-token,\
--oidc-issuer-url=https://${KEYCLOAK_HOST}/realms/che,\
--certificate-authority=/tmp/keycloak-ca.crt,\
--oidc-client-id=che-public,\
--oidc-extra-scope="email offline_access profile openid"
{orch-cli} get secret vc-vcluster -n vcluster -o jsonpath="{.data.certificate-authority}" | base64 -d > /tmp/vcluster-ca.crt
{orch-cli} config set-cluster vcluster \
  --server=https://$(kubectl get svc vcluster-lb \
    --namespace vcluster \
    --output jsonpath="{.status.loadBalancer.ingress[0].ip}"):443 \
  --certificate-authority=/tmp/vcluster-ca.crt
{orch-cli} config set-context vcluster \
  --cluster=vcluster \
  --user=vcluster
-
Use the vcluster kubeconfig context:
{orch-cli} config use-context vcluster
-
View the pods in the cluster. Running the following command redirects you to the authentication page:
{orch-cli} get pods --all-namespaces
Verification
-
All pods are displayed in the Running state.
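This verification can also be scripted. A sketch, assuming `kubectl get pods --all-namespaces --no-headers` style output is piped in; the `count_not_running` helper is illustrative, and the sample text stands in for real cluster output:

```shell
# Count pods whose STATUS column (field 4 of
# `kubectl get pods --all-namespaces --no-headers`) is neither
# Running nor Completed; 0 means the verification passes.
count_not_running() {
  awk '$4 != "Running" && $4 != "Completed" { n++ } END { print n+0 }'
}

# Example with sample output (NAMESPACE NAME READY STATUS RESTARTS AGE):
sample='kube-system coredns-abc 1/1 Running 0 5m
keycloak keycloak-xyz 0/1 CrashLoopBackOff 3 5m'
printf '%s\n' "$sample" | count_not_running
```

Against a live cluster you would pipe `{orch-cli} get pods --all-namespaces --no-headers` into the helper instead of the sample text.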
-
Install an Ingress Controller on the virtual {kubernetes} cluster.
Tip: Use the following commands to install the NGINX Ingress Controller on an Azure Kubernetes Service cluster:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Cluster
Important: If you use a registrar such as GoDaddy, you need to add the following two DNS records in your registrar and point them to the IP address of the ingress controller:
- type: A
- name: @ and *
Tip: Use the following command to find the external IP address of the NGINX Ingress Controller:
{orch-cli} get services ingress-nginx-controller \
  --namespace ingress-nginx \
  --output jsonpath="{.status.loadBalancer.ingress[0].ip}"
Tip: Use the following command to wait until the {kubernetes} host is known:
until ping -c1 ${DOMAIN_NAME} >/dev/null 2>&1; do :; done
-
Create a CheCluster patch YAML file, replacing CHE_PRIVATE_CLIENT_SECRET with the che-private client secret saved above:
cat > /tmp/che-patch.yaml << EOF
kind: CheCluster
apiVersion: org.eclipse.che/v2
spec:
  networking:
    ingressClassName: nginx
    auth:
      oAuthClientName: che-private
      oAuthSecret: CHE_PRIVATE_CLIENT_SECRET
      identityProviderURL: https://$KEYCLOAK_HOST/realms/che
      gateway:
        oAuthProxy:
          cookieExpireSeconds: 300
  components:
    cheServer:
      extraProperties:
        CHE_OIDC_USERNAME__CLAIM: email
EOF
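The placeholder substitution can be done with a one-line sed once the secret is in hand. A self-contained sketch; the `CHE_PRIVATE_CLIENT_SECRET_VALUE` variable name is illustrative, and a minimal stand-in file is used here in place of the real /tmp/che-patch.yaml:

```shell
# Sketch: create a minimal stand-in for the patch file, then substitute
# the CHE_PRIVATE_CLIENT_SECRET placeholder with the saved secret.
patch_file=$(mktemp)
printf 'oAuthSecret: CHE_PRIVATE_CLIENT_SECRET\n' > "$patch_file"

# Stand-in for the real secret printed by the kcadm.sh client-secret command:
CHE_PRIVATE_CLIENT_SECRET_VALUE='s3cr3t'

sed -i "s|CHE_PRIVATE_CLIENT_SECRET|${CHE_PRIVATE_CLIENT_SECRET_VALUE}|" "$patch_file"
cat "$patch_file"
```

In the real flow, point sed at /tmp/che-patch.yaml and use the actual secret value; a `|` delimiter avoids clashes if the secret contains `/`.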
-
Create the {prod-namespace} namespace:
{orch-cli} create namespace {prod-namespace}
-
Copy the Keycloak CA certificate into the {prod-namespace} namespace:
{orch-cli} create configmap keycloak-certs \
  --from-file=keycloak-ca.crt=/tmp/keycloak-ca.crt \
  --namespace {prod-namespace}
{orch-cli} label configmap keycloak-certs \
  app.kubernetes.io/part-of=che.eclipse.org \
  app.kubernetes.io/component=ca-bundle \
  --namespace {prod-namespace}
-
Deploy {prod-short}:
{prod-cli} server:deploy \
  --platform k8s \
  --domain $DOMAIN_NAME \
  --che-operator-cr-patch-yaml /tmp/che-patch.yaml
-
Verify the {prod-short} instance status:
$ {prod-cli} server:status
-
Navigate to the {prod-short} cluster instance:
$ {prod-cli} dashboard:open
-
Log in to the {prod-short} instance with the username che and the password che.