- Applicable sections of this readme are:
  - OVA credentials
  - oneadmin credentials
  - minimum system requirements
  - opennebula vm ubuntu 20.04 template instance credentials
  - usage notes (for blue team)
- OVA credentials
  - username: `root`
  - password: `ccdc`
- oneadmin credentials
  - username: `oneadmin`
  - password: `93762290fadc4665338878b8fee76d5c`
- minimum system requirements
  - 32GB available disk
  - 7GB available RAM
- opennebula vm ubuntu 20.04 template instance credentials
  - username: `root`
  - password: `ccdc`
- If not running OSX, ensure your BIOS has the virtualization CPU features enabled
- If running windows 10
  - download and install the intel processor identification utility
    - https://downloadcenter.intel.com/download/28539
  - if vt-d is disabled
    - gpedit
      - computer configuration -> administrative templates -> system -> device guard
      - modify "turn on virtualization based security" to disabled (document the original state)
    - `bcdedit /set hypervisorlaunchtype off`
    - reboot
    - NOTE: IF YOU DO THIS, REMEMBER TO REVERT THE CHANGE AFTER THE EXERCISE
      - to restore
        - gpedit
          - computer configuration -> administrative templates -> system -> device guard
          - modify "turn on virtualization based security" to not configured (or the original state)
        - `bcdedit /set hypervisorlaunchtype auto`
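- to confirm the current launch setting before and after the change, you can run this from an elevated command prompt (the `findstr` filter is just a convenience; the command prints nothing if the value has never been explicitly set):

  ```
  bcdedit /enum {current} | findstr /i hypervisorlaunchtype
  ```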
- Create a vm `opennebula-frontend`
  - 1 vcpu
  - 2GB RAM
  - 12GB Disk
- configure networking for the `opennebula-frontend` vm
  - select settings -> network -> adapter 1
    - set to NAT
  - select adapter 2
    - set to internal network with name: `swccdc-warmup2020-internal`
- attach the centos 7 iso to the vm
- Create a vm `opennebula-hypervisor`
  - 4 vcpu
  - 4GB RAM
  - 20GB Disk
- configure networking for the `opennebula-hypervisor` vm
  - select settings -> network -> adapter 1
    - set to NAT
  - select adapter 2
    - set to internal network with name: `swccdc-warmup2020-internal`
- attach the centos 7 iso to the vm
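- alternatively, both VMs can be created from the command line with VBoxManage; a sketch for `opennebula-frontend` (repeat with 4 cpus, 4096 MB RAM, and a 20480 MB disk for `opennebula-hypervisor`; the disk and ISO filenames are assumptions, substitute your local paths):

  ```
  # create and register the vm
  VBoxManage createvm --name opennebula-frontend --ostype RedHat_64 --register
  # 1 vcpu, 2GB ram, NAT on adapter 1, internal network on adapter 2
  VBoxManage modifyvm opennebula-frontend --cpus 1 --memory 2048 \
    --nic1 nat --nic2 intnet --intnet2 swccdc-warmup2020-internal
  # 12GB disk plus the centos 7 installer iso
  VBoxManage createmedium disk --filename opennebula-frontend.vdi --size 12288
  VBoxManage storagectl opennebula-frontend --name SATA --add sata
  VBoxManage storageattach opennebula-frontend --storagectl SATA --port 0 --device 0 \
    --type hdd --medium opennebula-frontend.vdi
  VBoxManage storageattach opennebula-frontend --storagectl SATA --port 1 --device 0 \
    --type dvddrive --medium CentOS-7-x86_64-Minimal.iso
  ```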
- install centos 7 on `opennebula-frontend`
  - hostname: `opennebula-frontend`
  - configure enp0s3 to be enabled and use dhcp (default)
  - configure enp0s8 to be 10.235.59.1/24 (no gateway or dns)
- install centos 7 on `opennebula-hypervisor`
  - hostname: `opennebula-hypervisor`
  - configure enp0s3 to be enabled and use dhcp (default)
  - configure enp0s8 to be 10.235.59.2/24 (no gateway or dns)
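- if you skipped the network step in the installer, the same enp0s8 settings can be applied afterwards with `nmcli`; a sketch for the frontend (use 10.235.59.2/24 on the hypervisor, and adjust the connection name if it doesn't match the interface name):

  ```
  # static address on the internal adapter, no gateway or dns
  nmcli con mod enp0s8 ipv4.method manual ipv4.addresses 10.235.59.1/24
  nmcli con mod enp0s8 connection.autoconnect yes
  nmcli con up enp0s8
  ```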
- after install power off both VMs
- if you're running windows, execute `cd "C:\Program Files\Oracle\VirtualBox"` to bring VBoxManage into execution scope
- execute `VBoxManage modifyvm "opennebula-frontend" --natpf1 "sunstone,tcp,127.0.0.1,9869,,9869"`
- execute `VBoxManage modifyvm "opennebula-frontend" --natpf1 "sunstone-rpc,tcp,127.0.0.1,2633,,2633"`
- execute `VBoxManage modifyvm "opennebula-frontend" --natpf1 "sunstone-ssh,tcp,127.0.0.1,8022,,22"`
- execute `VBoxManage modifyvm "opennebula-frontend" --natpf1 "sunstone-vnc,tcp,127.0.0.1,29876,,29876"`
- execute `VBoxManage modifyvm "opennebula-hypervisor" --natpf1 "hypervisor-ssh,tcp,127.0.0.1,9022,,22"`
- execute `VBoxManage modifyvm "opennebula-hypervisor" --natpf1 "router-ssh,tcp,127.0.0.1,10022,10.0.2.200,22"`
- execute `VBoxManage modifyvm "opennebula-hypervisor" --nested-hw-virt on`
- execute `VBoxManage modifyvm "opennebula-hypervisor" --cpus 4`
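- to verify the forwards and cpu settings took effect, you can list each VM's configuration (use `findstr` in place of `grep` on windows):

  ```
  VBoxManage showvminfo opennebula-frontend --machinereadable | grep -E 'Forwarding|^cpus'
  VBoxManage showvminfo opennebula-hypervisor --machinereadable | grep -E 'Forwarding|^cpus'
  ```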
- boot `opennebula-frontend` and login
  - execute `echo "10.235.59.1 opennebula-frontend" >> /etc/hosts`
  - execute `echo "10.235.59.2 opennebula-hypervisor" >> /etc/hosts`
- boot `opennebula-hypervisor` and login
  - use nmtui to create a bridge interface named br0 with enp0s3 set as a slave (an `nmcli` alternative is sketched below)
    - set to link-local for ipv4 and ipv6
  - execute `hostnamectl set-hostname opennebula-hypervisor`
  - execute `echo "10.235.59.1 opennebula-frontend" >> /etc/hosts`
  - execute `echo "10.235.59.2 opennebula-hypervisor" >> /etc/hosts`
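- the `nmcli` alternative to the nmtui bridge steps above, assuming enp0s3 is the connection being enslaved:

  ```
  # bridge with link-local addressing for both ipv4 and ipv6
  nmcli con add type bridge ifname br0 con-name br0 \
    ipv4.method link-local ipv6.method link-local
  # make enp0s3 a slave of br0, then bring the bridge up
  nmcli con add type bridge-slave ifname enp0s3 master br0
  nmcli con up br0
  ```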
- from your provisioning shell, run ansible (a sample `hosts` inventory is sketched below): `ansible-playbook -i hosts -u root --ask-pass deploy-opennebula.yml`
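- the `hosts` inventory itself isn't shown in this readme; a minimal sketch that reaches both VMs through the NAT port forwards configured above (the group name is an assumption, adjust to whatever the playbook targets):

  ```
  cat > hosts <<'EOF'
  [opennebula]
  opennebula-frontend   ansible_host=127.0.0.1 ansible_port=8022
  opennebula-hypervisor ansible_host=127.0.0.1 ansible_port=9022
  EOF
  ```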
- login to the frontend and hypervisor hosts and execute the following on each, answering yes to the host key prompts (this is what lets `oneadmin` ssh between the nodes non-interactively):

  ```
  su oneadmin
  ssh opennebula-frontend
  ssh opennebula-hypervisor
  exit
  ```
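- equivalently, the host keys can be pre-seeded without the interactive prompts; a sketch using `ssh-keyscan`, run as root on each node:

  ```
  # append both nodes' host keys to oneadmin's known_hosts
  su - oneadmin -c 'ssh-keyscan opennebula-frontend opennebula-hypervisor >> ~/.ssh/known_hosts'
  ```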
- open a browser and browse to http://127.0.0.1:9869
- create a network named `external-network`
  - bridge: `br0`
  - network mode: `bridged`
  - physical interface: `enp0s3`
  - first address: `10.0.2.200`
  - size: `10`
  - network address: `10.0.2.0`
  - network mask: `255.255.255.0`
  - gateway: `10.0.2.2`
  - dns: `10.0.2.3`
  - mtu of guests: `1500`
- create a network named `internal-network`
  - network mode: `vxlan`
  - physical device: `enp0s8`
  - first address: `172.16.4.1`
  - size: `100`
  - network address: `172.16.4.0`
  - network mask: `255.255.255.0`
  - mtu of guests: `1000`
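- the same networks can also be defined from a shell on the frontend with `onevnet` instead of sunstone; a sketch of the external network's template, assuming standard virtual network template attributes (names such as `GUEST_MTU` may differ between opennebula versions):

  ```
  # as oneadmin on opennebula-frontend
  cat > external-network.tmpl <<'EOF'
  NAME            = "external-network"
  VN_MAD          = "bridge"
  BRIDGE          = "br0"
  PHYDEV          = "enp0s3"
  NETWORK_ADDRESS = "10.0.2.0"
  NETWORK_MASK    = "255.255.255.0"
  GATEWAY         = "10.0.2.2"
  DNS             = "10.0.2.3"
  GUEST_MTU       = "1500"
  AR = [ TYPE = "IP4", IP = "10.0.2.200", SIZE = "10" ]
  EOF
  onevnet create external-network.tmpl
  ```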
- usage notes (for blue team)
  - If not running OSX, ensure your BIOS has the virtualization CPU features enabled
  - If running windows 10
    - download and install the intel processor identification utility
      - https://downloadcenter.intel.com/download/28539
    - if vt-d is disabled
      - gpedit
        - computer configuration -> administrative templates -> system -> device guard
        - modify "turn on virtualization based security" to disabled (document the original state)
      - `bcdedit /set hypervisorlaunchtype off`
      - reboot
      - NOTE: IF YOU DO THIS, REMEMBER TO REVERT THE CHANGE AFTER THE EXERCISE
        - to restore
          - gpedit
            - computer configuration -> administrative templates -> system -> device guard
            - modify "turn on virtualization based security" to not configured (or the original state)
          - `bcdedit /set hypervisorlaunchtype auto`
  - download and install VirtualBox
  - download the ovas (https://drive.google.com/file/d/1SWjL0rmpoARVceT5FUEHkjL1C2ESI5ph/view?usp=sharing)
  - import all ovas into virtual box
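  - the import can also be scripted; a sketch, assuming the ova filenames (adjust to whatever the download contains):

    ```
    VBoxManage import opennebula-frontend.ova
    VBoxManage import opennebula-hypervisor.ova
    ```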
  - select the `opennebula-frontend` vm
    - select settings -> network -> adapter 1
      - set to NAT
    - select adapter 2
      - set to internal network with name: `swccdc-warmup2020-internal`
  - execute `VBoxManage modifyvm "opennebula-frontend" --natpf1 "sunstone,tcp,127.0.0.1,9869,,9869"`
  - execute `VBoxManage modifyvm "opennebula-frontend" --natpf1 "sunstone-rpc,tcp,127.0.0.1,2633,,2633"`
  - execute `VBoxManage modifyvm "opennebula-frontend" --natpf1 "sunstone-ssh,tcp,127.0.0.1,8022,,22"`
  - execute `VBoxManage modifyvm "opennebula-hypervisor" --natpf1 "hypervisor-ssh,tcp,127.0.0.1,9022,,22"`
  - execute `VBoxManage modifyvm "opennebula-hypervisor" --natpf1 "router-ssh,tcp,127.0.0.1,10022,10.0.2.200,22"`
  - boot `opennebula-frontend`
  - boot `opennebula-hypervisor`
  - open a browser and browse to http://127.0.0.1:9869 and login using the oneadmin credentials located at the top of this document
  - register your computer's ssh public key with opennebula
    - to do this select your user name toward the top right of the browser window then select settings
    - select the auth button
    - select the edit button in the "Public SSH Key" section of the page
    - paste your public key
    - all future VMs provisioned by your user will allow ssh authentication for `root` using your public key
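  - if you don't already have a key pair, one way to generate one and print the public half for pasting (the ed25519 key type is a suggestion; any type your ssh client supports works):

    ```
    ssh-keygen -t ed25519
    cat ~/.ssh/id_ed25519.pub
    ```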
  - create a virtual router by creating a new vm from the `Service VNF` template, named `test-router`
    - enable dns server
      - listen on eth1
    - enable nat
      - outgoing interface on eth0
    - enable router
      - router interfaces eth0,eth1
    - eth0 attached to `external-network`
    - eth1 attached to `internal-network`
  - at this point you should have a VM named `test-router`
    - it should have 2 ip addresses
      - 10.0.2.200
      - 172.16.4.1
  - from your vm host computer you should be able to use your public key and ssh to the router with the following command: `ssh -p 10022 root@127.0.0.1`
  - create an ubuntu 20.04 instance
    - eth0 attached to `internal-network`
  - open a vnc console for the ubuntu instance. It should have an address of `172.16.4.2`
  - check connectivity by pinging google.com
  - delete the ubuntu 20.04 instance you just created and tested
  - create a new instance with the exact same configuration parameters using terraform and the opennebula provider (a sketch follows below)
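  - a sketch of what that terraform configuration might look like, using the community opennebula provider and the sunstone-rpc forward configured earlier; the resource attributes below are assumptions to adapt (the template and network ids in particular must match your install):

    ```
    # write a minimal main.tf, then initialize and apply
    cat > main.tf <<'EOF'
    terraform {
      required_providers {
        opennebula = {
          source = "OpenNebula/opennebula"
        }
      }
    }

    provider "opennebula" {
      endpoint = "http://127.0.0.1:2633/RPC2"
      username = "oneadmin"
      password = "93762290fadc4665338878b8fee76d5c"
    }

    # hypothetical resource mirroring the instance created by hand above
    resource "opennebula_virtual_machine" "ubuntu" {
      name        = "ubuntu-20.04"
      template_id = 0   # id of the ubuntu 20.04 template in your install

      nic {
        network_id = 1  # id of internal-network in your install
      }
    }
    EOF
    terraform init && terraform apply
    ```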
  - delete all instances