achetronic/metal-cloud

Metal Cloud

Description

Terraform module to provision VMs (and their resources, networks included) on a bare-metal Linux host, using Terraform on top of open-source tools such as Libvirt, QEMU and KVM.

Requirements

  • OpenSSH vx.x.x
  • Terraform v1.2.5+

How to use

All the following commands are executed from the root path of the repository.

1. Declare environment vars with the SSH connection parameters.

These can also be declared as input variables inside a .tfvars file

export TF_VAR_SSH_HOST="XXX.XXX.XXX.XXX"
export TF_VAR_SSH_USERNAME="yourUsername"
export TF_VAR_SSH_PASSWORD="yourPassword"
export TF_VAR_SSH_KEY_PATH="~/.ssh/id_ed25519"
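
Alternatively, the same values can be set in a .tfvars file. A minimal sketch (the file name and placeholder values are illustrative; the variable names match the TF_VAR_* names above):

```hcl
# terraform.tfvars (hypothetical example)
SSH_HOST     = "XXX.XXX.XXX.XXX"
SSH_USERNAME = "yourUsername"
SSH_PASSWORD = "yourPassword"
SSH_KEY_PATH = "~/.ssh/id_ed25519"
```

Pass it to Terraform with `terraform apply -var-file=terraform.tfvars` if you don't use the default file name.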

2. Install some REQUIRED dependencies on the local machine

To build the ISOs, mkisofs must be installed.

sudo apt install mkisofs

3. Execute some REQUIRED preparation scripts to bootstrap the host

At the moment, only recent Ubuntu versions are supported. Feel free to extend the OS support by contributing your code to this repository.

# Copy current SSH key into the target host
echo ${TF_VAR_SSH_PASSWORD} | ssh-copy-id -f ${TF_VAR_SSH_USERNAME}@${TF_VAR_SSH_HOST}

# Give execution permissions to the helper scripts
chmod -R +x ./scripts

# Connect to the host machine by SSH and install the dependencies using passwordless authentication 
# (user with sudo privileges required) 
scp ./scripts/prepare-host-ubuntu.sh ${TF_VAR_SSH_USERNAME}@${TF_VAR_SSH_HOST}:/tmp
ssh ${TF_VAR_SSH_USERNAME}@${TF_VAR_SSH_HOST} "sudo bash /tmp/prepare-host-ubuntu.sh ${TF_VAR_SSH_USERNAME}"

4. Include the module with VMs definition in your code

Using this module is pretty simple: include the module from Git and pass the data structures that define your VM composition.

module "arm-virtual-machines" {

  source = "git@github.com:achetronic/metal-cloud.git//terraform?ref=v1.0.0"

  # Global configuration
  globals   = local.globals

  # Configuration related to VMs directly
  networks  = local.networks
  instances = local.instances
}

This module can be used to bootstrap VMs for any purpose, but we prepared a complete example for Kubernetes to show the most difficult scenario. The examples can be found inside the examples directory.

4.1. Define some global aspects

The globals structure is there for those variables that affect general aspects of your VM composition:

locals {
  # Globals definition
  globals = {

    # Configuration for SSH connection parameters
    ssh_connection = {
      host     = var.SSH_HOST
      username = var.SSH_USERNAME

      password = var.SSH_PASSWORD
      key_path = var.SSH_KEY_PATH
      mode     = "password"
    }

    # Parameters related to those files used/thrown at some point on VM creation
    io_files = {

      # Path to the folder containing SSH keys authorized in all the VMs
      external_ssh_keys_path = "./files/input/external-ssh-keys"

      # Path to the folder where instances' autogenerated SSH keys will be stored
      instances_ssh_keys_path = "./files/output"
    }

    # Parameters related to the installable OS on VMs creation
    os = {

      # Distro version to use
      # At the moment, only Ubuntu is supported, so the version is something like: 23.04
      version = "23.04"
    }
  }
}

4.2. Networking devices

VMs are useless if they are not connected to anything. For this, you can include network devices that VMs can reference later. There are two types of devices supported by the module: nat and macvtap.

Let's see a macvtap configuration.

Remember that macvtap connects network interfaces of the VMs directly to a physical interface of the host.

locals {
  # Networks definition
  networks = {

    # Configuration for a macvtap device
    external0 = {
      mode = "macvtap"

      # Host physical interface to attach
      # the autogenerated virtual interface
      interface = "eth0"

      # Assignable IP address blocks in CIDR notation
      dhcp_address_blocks = ["192.168.2.40/28"]

      # Address to the gateway
      gateway_address = "192.168.2.1"
    }
  }
}

What about crafting a NAT device?

locals {
  networks = {
    
    # Configuration for a NAT
    virnat0 = {
      mode = "nat"

      # Assignable IP address blocks in CIDR notation
      dhcp_address_blocks = ["10.10.10.0/24"]

      # Address to the gateway
      gateway_address = "10.10.10.1"
    }
  }
}

4.3. Creating some VMs

The goal of this module is to create VMs, so let's define one:

locals {

  instances = {
    
    my-wonderful-vm-name = {
      vcpu     = 2
      memory   = 6144
      disk     = 20000000000
      networks = [
        {
          name    = "external0"
          address = "192.168.2.41/24"
          
          # Yes, MAC definition is mandatory
          mac     = "DA:C8:20:7A:37:BF"
        }
      ]
    }
  }
}

Hey, I have a different architecture or a well-known machine type. OK, let's define both fields:

Hey, listen! At this moment, only aarch64 and x86_64 architectures are supported. However, ARM64 boards are not as standardized as x86_64 ones, so you may not find the right value for the machine field. In that case, just set virt, and some tweaks will be applied automatically under the hood to improve support for such boards.

locals {
  
  instances = {

    kube-master-0 = {
      # Using 'OrangePi 5' as hypervisor. This SBC is quite new, so there is no specific machine type for it
      # Use generic 'virt' to apply all needed patches for this kind of environments
      # Ref: https://www.qemu.org/docs/master/system/target-arm.html
      arch    = "aarch64"
      machine = "virt"

      vcpu     = 2
      memory   = 6144
      disk     = 20000000000
      networks = [
        {
          name    = "external0"
          address = "192.168.2.41/24"
          mac     = "DA:C8:20:7A:37:BF"
        }
      ]
    }
  }
}

5. Create your VMs.

Once you have defined them, just push the red button and unleash the chaos:

terraform init && terraform apply

Security considerations

For security reasons, a random password and an SSH key pair are auto-generated per instance. This means that each instance has a different password and a different authorized SSH key. They are stored in the tfstate, so execute terraform state list and then display the resource you need with terraform state show ···

When the terraform apply is complete, all the SSH private key files are exported so you can access or manage the instances.

There is a special directory located in the path defined by globals.io_files.external_ssh_keys_path. It was created for the special case where several well-known SSH keys must be authorized in all the instances at the same time. This can be risky and must be used at your own risk. If you need it, place some .pub key files inside, and they will be configured and authorized in all the instances.
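
For instance, assuming the default path shown in the globals example above, authorizing an extra key could look like this (demo_key is a throwaway name generated here purely for illustration; in practice you would copy a real .pub file you trust):

```shell
# Create the directory expected by globals.io_files.external_ssh_keys_path
mkdir -p ./files/input/external-ssh-keys

# Generate a throwaway key pair just for this demo
ssh-keygen -t ed25519 -N "" -q -f ./demo_key

# Any .pub file placed here will be authorized in ALL the instances
cp ./demo_key.pub ./files/input/external-ssh-keys/
```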

How to collaborate

Of course, I'm open to external collaborations on this project. To do so, you must:

  • Fork the repository
  • Make your changes to the code
  • Open a PR.

Be advised: the code will always be reviewed and tested.