This lab walks through the steps to install a cluster entirely on premises using the assisted installer agent (the agent-based installer).
- Open your web browser and navigate to https://{hostname from instructor}:6080
- Click on the "Connect" button underneath the noVNC logo
- Enter the password "supersecret"
- You will be presented with a RHEL 9 desktop that has Firefox and a terminal you can use to interact with your lab environment.
- Click through any new windows that appear
- Click the "Activities" button in the upper left of the screen
- Click on the terminal icon in the bottom center of the screen.
- Create the agent ISO (which embeds the ignition configuration) that the "baremetal" VMs will boot from to install a cluster
- Review the install-config.yaml
- Execute
cat ./my-cluster/install-config.yaml
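The exact contents are specific to the lab environment; a minimal agent-based install-config.yaml usually looks roughly like the sketch below. The cluster name and base domain are inferred from the console URL used later in this lab, while the network ranges, VIPs, and pull secret/SSH key placeholders are illustrative assumptions only.
apiVersion: v1
baseDomain: ocpbaremetal.com
metadata:
  name: ocp
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 2
networking:
  machineNetwork:
  - cidr: 192.168.122.0/24
platform:
  baremetal:
    apiVIPs:
    - 192.168.122.10
    ingressVIPs:
    - 192.168.122.11
pullSecret: '<your pull secret>'
sshKey: '<your public ssh key>'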
- Review the agent-config.yaml
- Execute
cat ./my-cluster/agent-config.yaml
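As with the install config, the real file is environment specific; the key field is the rendezvous IP, which matches the host this lab SSHes into later (192.168.122.2). A rough sketch, with the apiVersion, hostname, interface name, and MAC address as placeholder assumptions:
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: ocp
rendezvousIP: 192.168.122.2
hosts:
- hostname: master0
  role: master
  interfaces:
  - name: enp1s0
    macAddress: 00:00:00:00:00:01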
- Execute
openshift-install agent create image --dir=./my-cluster
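This writes the bootable image into the my-cluster directory; you can confirm it is there before copying it:
ls -lh ./my-cluster/agent.x86_64.iso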
- Copy the ISO to the /vmhosts directory, which the VMs are preconfigured to boot from first
- Execute
cp my-cluster/agent.x86_64.iso /vmhosts/
- Click the "Activities" button in the upper left of the screen
- Click on the Firefox button on the bottom center of the screen
- Navigate the Firefox browser to http://127.0.0.1:9090
- Login to the console with username ec2-user and password supersecret
- Click on the "Limited access" button to the left of Help in the top center of the screen to grant administrative access
- Navigate to the "Virtual machines" tab on the left of the screen
- Start all of the VMs for the first cluster by clicking Run next to each of the following:
- master0
- master1
- master2
- worker0
- worker1
- You will be able to monitor the master0 console by clicking on it from the virtual machines tab
- Switch back to the terminal
- First, review the logs of the agent control service on the rendezvous host
- Execute
ssh core@192.168.122.2
- Execute
journalctl -f
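journalctl -f follows every unit on the rendezvous host; if you only want the assisted service logs, you can filter by unit name (the unit name here is an assumption and may vary by release):
journalctl -u assisted-service.service -f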
- Execute
exit
- Set up the kubeconfig to access the cluster operator status by executing
export KUBECONFIG=/home/ec2-user/my-cluster/auth/kubeconfig
- Check the status of the cluster operators by executing
oc get clusteroperators
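The operators converge over time; alternatively, the installer can block until the installation reports complete (assuming your openshift-install binary includes the agent wait-for subcommands):
openshift-install agent wait-for install-complete --dir=./my-cluster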
- Open the terminal and execute
cat ./my-cluster/auth/kubeadmin-password
this will be the kubeadmin password to access the console
- Switch back to the Firefox window and go to the URL "https://console-openshift-console.apps.ocp.ocpbaremetal.com"
- Enter the kubeadmin username and the password from the previous step
- Login to the console
- Click on Operators on the left and then OperatorHub
- In the "Filter by keyword" field enter "lvm storage"
- Click on the "LVM Storage" tile
- Click on the Install button
- Scroll to the bottom leaving all the defaults and click on "Install"
- Click on Installed operators on the left tab
- Click on LVM Storage
- Click the "Create LVMCluster" button
- Click on "Create" button
- Click on the storage tab on the left
- Click on StorageClasses
- Click on lvms-vg1
- Click on "1 annotation"
- Click on "Add more"
- For the key enter "storageclass.kubernetes.io/is-default-class" and for the value enter "true", then save the annotation
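Setting this annotation in the console is equivalent to annotating the storage class from the terminal:
oc annotate storageclass lvms-vg1 storageclass.kubernetes.io/is-default-class=true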
- Login to the console
- Click on Operators on the left and then OperatorHub
- In the "Filter by keyword" field enter "advanced cluster management"
- Click on the "Advanced Cluster Management for Kubernetes" tile
- Click the "Install" button in the top left
- Scroll down and leave all the defaults
- Click on the "Install" button
- Click on the "View installed Operators in Namespace open-cluster-management" link
- Click on "Advanced Cluster Management for Kubernetes"
- Click on "Create MultiClusterHub" button
- Leave the default details and click on the "Create" button
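The console form creates a MultiClusterHub resource in the operator namespace; a terminal equivalent with all defaults (the resource name is an assumption based on the operator's usual default) would be roughly:
cat <<'EOF' | oc apply -f -
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
EOF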
- Wait some time; eventually a pop-up with a "Refresh web console" link will appear. Click on the link
- From the drop down at the top left of the screen click on local-cluster and select All clusters
- You will see a "Red Hat Advanced Cluster Management for Kubernetes is not ready" pop-up; wait a while and click Dismiss
- Click on the "Infrastructure" tab on the left
- Click on "Host inventory"
- Click on the "configure host inventory settings" link toward the top right of the screen
- Keep the defaults in the pop-up and click on the "Configure" button
- You will see a "Configuration might take a few minutes". Wait until that goes away
- Eventually the "Create infrastructure environment" button will become available; click on it
- Enter a name for the infrastructure environment (for this lab use "default") and for the location use "default". For everything else leave the defaults
- For the pull secret open your terminal window and execute
cat /home/ec2-user/pullsecret.json
copy the output to the pull secret field in Firefox.
- For the SSH public key open your terminal window and execute
cat /home/ec2-user/.ssh/id_rsa.pub
copy the output to the SSH public key field.
- Click the "Create" button
- Click the "Add hosts" button in the upper right of the screen
- From the menu, click "With Discovery ISO"
- Click the copy button to the right of the command in the section for downloading the ISO
- Switch to the terminal and execute the following commands
cd /vmhosts/
- Paste the copied command and add
--no-check-certificate
and run it
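The copied command is unique to your infrastructure environment; with the flag appended it will look roughly like the following, where the URL is whatever the console generated for you:
wget --no-check-certificate -O discovery.iso '<URL copied from the console>'
This leaves the image at /vmhosts/discovery.iso, the path used when mounting it to the worker VMs below.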
- Switch back to firefox and navigate to "https://localhost:9090"
- Click on "Virtual machines" on the left
- Click on worker-2
- Scroll down to the disks section and click eject on the cdrom row
- Now click on the "Insert" button
- Enter "/vmhosts/discovery.iso" in the custom path
- Click on "Insert", scroll to the top, and click on "Run"
- Repeat the preceding steps (ejecting the CD-ROM, inserting the discovery ISO, and clicking "Run") for worker-3
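If you prefer the command line over the Cockpit UI, the eject, insert, and run sequence can usually be done with libvirt directly; the CD-ROM target device (sda here) is an assumption and may differ on your VMs:
sudo virsh change-media worker-2 sda --eject
sudo virsh change-media worker-2 sda /vmhosts/discovery.iso --insert
sudo virsh start worker-2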
- Switch back to the openshift console
- From the menu in the upper left make sure "All clusters" is selected
- Expand the "Infrastrucure" band on the left
- Click on the "Host inventory" and then click on default
- Click on "Hosts" towards the top center of the screen under default
- Click on "Approve host" for worker2 and worker3
- Click on the "Clusters" tab
- Click on "Create cluster"
- Click the "Host Inventory" tile
- Click on the "Hosted" tile
- For the cluster name provide "test1"
- For the cluster set select default
- For the base domain enter "ocpbaremetal.com"
- For the pull secret open your terminal window and execute
cat /home/ec2-user/pullsecret.json
copy the output to the pull secret field in Firefox.
- Click the "Next" button
- Change the "Controller availability policy" to "Single replica"
- Change the "Infrastructure availability policy" to "Single replica"
- Leave everything else as the default on this screen and click "Next"
- For the host address specify api.test1.ocpbaremetal.com
- Enter 31876 for the Host port
- For the ssh public key open your terminal window and execute
cat /home/ec2-user/.ssh/id_rsa.pub
copy the output to the SSH public key field.
- Click on the "Next" button
- Click the "Create" button