Install OCP with Assisted Installer

This lab walks through the steps to install a cluster using the Assisted Installer agent entirely on premises.

Steps

Create the Agent Boot ISO

  1. Open your web browser and navigate to https://{hostname from instructor}:6080
  2. Click on the "Connect" button underneath the noVNC logo
  3. Enter the password "supersecret"
  4. You will be presented with a RHEL 9 desktop that has Firefox and a terminal you can use to interact with your lab environment.
  5. Click through any new windows that appear
  6. Click the "Activities" button in the upper left of the screen
  7. Click on the terminal icon in the bottom center of the screen.
  8. Create the agent boot ISO that the "baremetal" VMs will boot from to install the cluster (the commands below are consolidated in the sketch after this list)
  9. Review the install-config.yaml
  10. Execute cat ./my-cluster/install-config.yaml
  11. Review the agent-config.yaml
  12. Execute cat ./my-cluster/agent-config.yaml
  13. Execute openshift-install agent create image --dir=./my-cluster
  14. Copy the ISO to the /vmhosts directory that the VMs are preconfigured to boot from first
  15. Execute cp my-cluster/agent.x86_64.iso /vmhosts/
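
For reference, the commands from the list above can be run in sequence from the terminal. A minimal sketch, assuming /home/ec2-user is the working directory (as it is elsewhere in this lab):

```bash
# Review the configuration the installer will consume
cat ./my-cluster/install-config.yaml
cat ./my-cluster/agent-config.yaml

# Generate the agent boot ISO from those configs
openshift-install agent create image --dir=./my-cluster

# Copy the ISO to the directory the VMs are preconfigured to boot from
cp my-cluster/agent.x86_64.iso /vmhosts/
```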

Install the Cluster

  1. Click the "Activities" button in the upper left of the screen
  2. Click on the Firefox button on the bottom center of the screen
  3. Navigate the Firefox browser to http://127.0.0.1:9090
  4. Log in to the console with username ec2-user and password supersecret
  5. Click on the limited access button to the left of Help in the top center of the screen to grant administrative access
  6. Navigate on the left of the screen to the "virtual machines" tab
  7. Start all of the VMs for the first cluster by clicking "Run" next to each of the following (or start them from the terminal; see the sketch after this list):
    1. master0
    2. master1
    3. master2
    4. worker0
    5. worker1
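
If you prefer the terminal to the Cockpit UI, the same VMs can usually be started with virsh. This is a sketch that assumes the Cockpit VMs are libvirt domains on the system connection, named exactly as shown in the UI:

```bash
# Start the five "baremetal" VMs that will boot the agent ISO
for vm in master0 master1 master2 worker0 worker1; do
  sudo virsh start "$vm"
done

# Confirm they are running
sudo virsh list
```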

Track the Install

UI

  1. You can monitor the master0 console by clicking on it from the "Virtual machines" tab

Console

  1. Switch back to the terminal
  2. First, look at the logs for the agent control service
  3. Execute ssh core@192.168.122.2
  4. Execute journalctl -f
  5. Execute exit
  6. Set up the kubeconfig for checking cluster operator status by executing export KUBECONFIG=/home/ec2-user/my-cluster/auth/kubeconfig
  7. Check the status of the cluster operators by executing oc get clusteroperators (these commands are consolidated in the sketch after this list)
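
Consolidated, the console-side monitoring looks like this. The final wait-for command is an optional extra not in the steps above, but the agent-based installer supports it as a way to block until the install finishes:

```bash
# Follow the agent service logs on the first node (Ctrl-C to stop, then exit)
ssh core@192.168.122.2
journalctl -f
exit

# Point oc at the new cluster and check operator status
export KUBECONFIG=/home/ec2-user/my-cluster/auth/kubeconfig
oc get clusteroperators

# Optional: block until the installer reports completion
openshift-install agent wait-for install-complete --dir=./my-cluster
```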

Cluster Access

  1. Open the terminal and execute cat ./my-cluster/auth/kubeadmin-password; the output is the kubeadmin password for accessing the console (a CLI alternative to the console login follows this list)
  2. Switch back to the Firefox window and go to the URL "https://console-openshift-console.apps.ocp.ocpbaremetal.com"
  3. Enter the kubeadmin username and password from the first step
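
If you prefer the terminal, you can log in with oc instead. The API endpoint here is an assumption inferred from the console URL above (an apps domain of ocp.ocpbaremetal.com suggests an API at api.ocp.ocpbaremetal.com):

```bash
# Log in as kubeadmin using the generated password; the lab's self-signed
# certificate means TLS verification must be skipped
oc login https://api.ocp.ocpbaremetal.com:6443 \
  -u kubeadmin \
  -p "$(cat ./my-cluster/auth/kubeadmin-password)" \
  --insecure-skip-tls-verify=true
```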

Install Local Storage

  1. Log in to the console
  2. Click on "Operators" on the left and then "OperatorHub"
  3. In the "Filter by keyword" field, enter "lvm storage"
  4. Click on the "LVM Storage" tile
  5. Click on the "Install" button
  6. Scroll to the bottom, leaving all the defaults, and click on "Install"
  7. Click on "Installed Operators" in the left tab
  8. Click on LVM Storage
  9. Click the "Create LVMCluster" button
  10. Click on "Create" button
  11. Click on the storage tab on the left
  12. Click on StorageClasses
  13. Click on lvms-vg1
  14. Click on "1 annotation"
  15. Click on "Add more"
  16. For the key, enter "storageclass.kubernetes.io/is-default-class" and for the value, "true"; this marks lvms-vg1 as the default StorageClass (a CLI equivalent follows this list)
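
Steps 11-16 can also be done from the terminal. This sketch uses the standard Kubernetes annotation for default StorageClasses and assumes the class created by the LVMCluster is named lvms-vg1, as shown in the console:

```bash
# Mark lvms-vg1 as the default StorageClass
oc patch storageclass lvms-vg1 -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Verify: the class should now show "(default)" next to its name
oc get storageclass
```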

Install Hosted Control Planes

  1. Log in to the console
  2. Click on "Operators" on the left and then "OperatorHub"
  3. In the "Filter by keyword" field, enter "advanced cluster management"
  4. Click on the "Advanced Cluster Management for Kubernetes" tile
  5. Click the "Install" button in the top left
  6. Scroll down and leave all the defaults
  7. Click on the "Install" button
  8. Click on the "View installed Operators in Namespace open-cluster-management" link
  9. Click on "Advanced Cluster Management for Kubernetes"
  10. Click on "Create MultiClusterHub" button
  11. Leave the default details and click on the "Create" button
  12. Wait some time; eventually a pop-up with a "Refresh web console" link will appear. Click on the link
  13. From the drop-down at the top left of the screen, click on local-cluster and select "All clusters"
  14. You will see a "Red Hat Advanced Cluster Management for Kubernetes is not ready" pop-up; wait a while and click "Dismiss" (you can watch the hub come up from the terminal; see the sketch after this list)
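
While you wait, the hub's progress is visible from the terminal. A sketch, assuming the default open-cluster-management namespace chosen during the operator install above:

```bash
# The MultiClusterHub reports a Running status once all components are up
oc get multiclusterhub -n open-cluster-management -w

# The pods backing the hub land in the same namespace
oc get pods -n open-cluster-management
```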

Configure Host Inventory

  1. Click on the "Infrastructure" tab on the left
  2. Click on "Host inventory"
  3. Click on the "configure host inventory settings" link toward the top right of the screen
  4. Keep the defaults in the pop-up and click on the "Configure" button
  5. You will see a "Configuration might take a few minutes" message; wait until it goes away
  6. Eventually the "Create infrastructure environment" button will become available; click on it
  7. Enter a name for the infrastructure environment; for the lab, use default, and for the location, use default as well. Leave everything else at the defaults
  8. For the pull secret, open your terminal window, execute cat /home/ec2-user/pullsecret.json, and copy the output into the pull secret field in Firefox
  9. For the SSH public key, open your terminal window, execute cat /home/ec2-user/.ssh/id_rsa.pub, and copy the output into the SSH public key field
  10. Click the "Create" button
  11. Click the "Add hosts" button in the upper right of the screen
  12. Click the "With Discovery ISO" menu
  13. Click the copy button to the right of the command in the section for downloading the ISO
  14. Switch to the terminal and execute the following commands
    1. cd /vmhosts/
    2. Paste the copied command, append --no-check-certificate, and run it (see the sketch after this list)
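
The copied command is environment-specific (the console generates a URL containing a per-environment token), so the URL below is a placeholder; naming the output discovery.iso is an assumption that matches the /vmhosts/discovery.iso path used in the next section. The final command looks roughly like:

```bash
cd /vmhosts/

# <discovery ISO URL> is whatever the console copied to your clipboard;
# --no-check-certificate accepts the lab's self-signed certificate
wget --no-check-certificate -O discovery.iso "<discovery ISO URL>"
```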

Create an HCP (Hosted Control Plane) Cluster

  1. Switch back to firefox and navigate to "https://localhost:9090"
  2. Click on "Virtual machines" on the left
  3. Click on worker-2
  4. Scroll down to the disks section and click "Eject" on the CD-ROM row
  5. Now click on the "Insert" button
  6. Enter "/vmhosts/discovery.iso" in the custom path
  7. Click on "Insert", scroll to the top, and click on "Run"
  8. Repeat steps 1-7 for worker-3
  9. Switch back to the openshift console
  10. From the menu in the upper left make sure "All clusters" is selected
  11. Expand the "Infrastructure" band on the left
  12. Click on the "Host inventory" and then click on default
  13. Click on "Hosts" towards the top center of the screen under default
  14. Click on "Approve host" for worker2 and worker3
  15. Click on the "Clusters" tab
  16. Click on "Create cluster"
  17. Click the "Host Inventory" tile
  18. Click on the "Hosted" tile
  19. For the cluster name provide "test1"
  20. For the cluster set select default
  21. For the base domain, enter "ocpbaremetal.com"
  22. For the pull secret, open your terminal window, execute cat /home/ec2-user/pullsecret.json, and copy the output into the pull secret field in Firefox
  23. Click the "Next" button
  24. Change the "Controller availability policy" to "Single replica"
  25. Change the "Infrastructure availability policy" to "Single replica"
  26. Leave everything else as the default on this screen and click "Next"
  27. For the host address specify api.test1.ocpbaremetal.com
  28. Enter 31876 for the Host port
  29. For the SSH public key, open your terminal window, execute cat /home/ec2-user/.ssh/id_rsa.pub, and copy the output into the SSH public key field
  30. Click on the "Next" button
  31. Click the "Create" button