
openstack on ubuntu with juju


UPDATE: I've switched to using the exact recipe on http://astokes.org/containerize-jujus-local-provider/

ignore this page

Cribbing my notes from the ubuntu-cloud-installer pages. Please forgive; this is all work in progress, and I'm not successful yet. I'll update this into a good recipe once I get there.

Check, for instance, http://ubuntu-cloud-installer.readthedocs.org/en/latest/multi-installer.guide.html

This method requires Ubuntu 14.04 (trusty), so I'm trying to do it with a container on 12.04 (precise).

I have some LXC containers running on both kernel 3.2 and kernel 3.13 (both Ubuntu 12.04; the 3.13 kernel is the HWE update).

Apparently I can create a container with a specified trusty release, even on the 3.2 kernel, and it thinks it's Ubuntu 14.04.

And I didn't know you could snapshot containers. That may be useful for debugging.

create ubuntu 14.04 container

# to make a trusty ubuntu
sudo lxc-create -t download -n trusty1 -- --dist ubuntu --release trusty --arch amd64
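
To poke around in the new container, something like this should work (a sketch; lxc-attach wants a newer kernel, so on the 3.2 hosts lxc-console may be needed instead):

# start the container in the background
sudo lxc-start -n trusty1 -d
# get a shell inside it
sudo lxc-attach -n trusty1
# confirm it really reports 14.04
cat /etc/lsb-release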

Take snapshot of a container

It's also possible to take a snapshot of a container. To take a snapshot of the container ubuntu01, enter the following commands:

lxc-stop -n ubuntu01
lxc-snapshot -n ubuntu01

Sample output:

lxc-snapshot -n ubuntu01
    lxc_container: Snapshot of directory-backed container requested.
    lxc_container: Making a copy-clone.  If you do want snapshots, then
    lxc_container: please create an aufs or overlayfs clone first, snapshot
    lxc_container: that and keep the original container pristine.

The snapshots are stored in the /var/lib/lxcsnaps/ directory on the host.

ls /var/lib/lxcsnaps/
ubuntu01

Restoring Snapshots

To restore a container from the snapshot, use the following command.

lxc-snapshot -n ubuntu01 -r snap0
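
To see which snapshots exist before restoring, something like this should work (a sketch, assuming the lxc 1.0 tools; -L lists snapshots):

# list existing snapshots (snap0, snap1, ...)
lxc-snapshot -n ubuntu01 -L
# or just look in the snapshot store directly
ls /var/lib/lxcsnaps/ubuntu01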

install juju

using mr-0x9-cntr1

ssh mr-0xd9
lxc-stop -n cntr1
lxc-start -n cntr1

https://juju.ubuntu.com/install

sudo add-apt-repository ppa:juju/stable
sudo apt-get update && sudo apt-get install juju-core
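
A quick sanity check after the install (a sketch; generate-config writes a template ~/.juju/environments.yaml if you don't already have one):

juju version
juju generate-config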

ubuntu openstack installer

alternatives: juju-deployer, devstack

http://astokes.org/ubuntu-openstack-installer/

http://astokes.org/ubuntu-openstack-installer-upcoming-ui-enhancements/

The installer's GitHub page (experimental branch):

https://github.com/Ubuntu-Solutions-Engineering/cloud-installer

multi installer guide

http://ubuntu-cloud-installer.readthedocs.org/en/latest/multi-installer.guide.html

Requirements

Decent machine, tested on a machine with 8 cores, 12G ram, and 100G HDD.
Ubuntu Trusty 14.04
Juju 1.18.3+ (includes support for lxc fast cloning for multiple providers)
About 30 minutes  

They recommend Ubuntu Server, but I'm trying Ubuntu Desktop.

sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:maas-maintainers/stable
sudo apt-add-repository ppa:cloud-installer/ppa

or cloud-installer experimental

sudo apt-add-repository ppa:cloud-installer/experimental

Openstack install

I went back to the default use of lxcbr0 in the container, with whatever dhcp it uses for the local network. OpenStack seems to want dhcp and a local network by default. It actually tries to modify the host's /etc/network/interfaces if you say you want things externally visible when you get to that prompt. Don't do that for now; just assume we want an OpenStack that only sees itself and is bridged through the host with lxcbr0.

Check the container's ifconfig and whether you can ping out beforehand:

root@trusty1:~# ping mr-0x1
PING mr-0x1 (172.16.2.171) 56(84) bytes of data.
64 bytes from mr-0x1.0xdata.loc (172.16.2.171): icmp_seq=1 ttl=63 time=16.0 ms
64 bytes from mr-0x1.0xdata.loc (172.16.2.171): icmp_seq=2 ttl=63 time=140 ms
64 bytes from mr-0x1.0xdata.loc (172.16.2.171): icmp_seq=3 ttl=63 time=69.0 ms
64 bytes from mr-0x1.0xdata.loc (172.16.2.171): icmp_seq=4 ttl=63 time=103 ms

root@trusty1:~# ifconfig
eth0  Link encap:Ethernet  HWaddr 00:16:3e:9a:1c:d0  
      inet addr:10.0.3.68  Bcast:10.0.3.255  Mask:255.255.255.0
      inet6 addr: fe80::216:3eff:fe9a:1cd0/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:19793 errors:0 dropped:0 overruns:0 frame:0
      TX packets:11162 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:29246567 (29.2 MB)  TX bytes:786539 (786.5 KB)

lo    Link encap:Local Loopback  
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:65536  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 

kvm support within an lxc container?

This is needed to get the OpenStack install working inside the container. Note we're just editing (appending to) the container's rc.local. You can do this manually instead, which is probably safer for not clobbering any current contents.

Adam says: http://astokes.org/containerize-jujus-local-provider/

I didn't do this before; my container had lxcbr0 on 10.0.3.1.

Update the container’s lxcbr0 to be on its own network:

ubuntu@fluffy:~$ cat <<-EOF | sudo tee /var/lib/lxc/joojoo/rootfs/etc/default/lxc-net
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.4.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.4.0/24"
LXC_DHCP_RANGE="10.0.4.2,10.0.4.254"
LXC_DHCP_MAX="253"
EOF
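
For that to take effect, restart lxc-net inside the container (or just reboot the container). A quick check afterwards, inside the container:

# the container's own bridge should now be on the new network
ifconfig lxcbr0 | grep 'inet addr'    # expect 10.0.4.1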

Adam says: create the necessary character device files for kvm support within lxc via mknod, and persist them through reboots. (Translated to my cntr1.)

Do this on the host (writing the container's rc.local), or manually in rc.local in the container. Use the right container name if it's not trusty1.

cat <<-EOF | sudo tee /var/lib/lxc/trusty1/rootfs/etc/rc.local
#!/bin/sh
mkdir -p /dev/net || true
mknod /dev/kvm c 10 232
mknod /dev/net/tun c 10 200
exit 0
EOF
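
rc.local has to be executable to run at boot (it usually already is, but just in case), and after a container reboot the device nodes should exist. A sketch:

# on the host
sudo chmod +x /var/lib/lxc/trusty1/rootfs/etc/rc.local
# then, inside the rebooted container
ls -l /dev/kvm /dev/net/tun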

In the container, this should now work (after a container reboot and installing the cloud-installer):

sudo kvm-ok

You can search for info on Ubuntu kvm support (kernel hardware virtualization), e.g. http://www.howtogeek.com/117635/how-to-install-kvm-and-create-virtual-machines-on-ubuntu/
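
A quick way to check the host CPU directly (a count of 0 means no hardware virtualization flags):

# vmx = Intel VT-x, svm = AMD-V
egrep -c '(vmx|svm)' /proc/cpuinfo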

install this in the container

apt-get install -qyf libvirt-bin uvtool uvtool-libvirt software-properties-common

Users should be in the libvirtd group:

usermod -a -G libvirtd ubuntu
usermod -a -G libvirtd kevin
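
To verify the group change took (membership is only picked up at login, so log out/in or use newgrp first):

id kevin | grep libvirtd
groups ubuntu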

Just to give a more visual representation of the setup (LXCs within LXCs):

Baremetal Machine
- LXC Container
  - Runs juju bootstrap agent
  - KVM (machine 1)
    - Houses a bunch of LXC's for the openstack services
  - KVM (machine 2)
    - Houses nova-compute
  - KVM (machine 3)
    - Houses quantum-gateway

install
sudo apt-get update
sudo apt-get install cloud-installer

The installer should be run as a non-root user

adduser kevin
su - kevin
sudo cloud-install
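
To watch what the installer is doing, follow its log (the same log path shows up again further down this page) in another terminal:

sudo tail -f /var/log/cloud-install.log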

Ignore this section. I was trying to hack python3/2.7 onto a 12.04 system. Don't do that; just get 14.04 in the container.

Ah, it wants python3 and python > 2.7. There's no way around this; I have to install Ubuntu 14.04 in my container. The kernel is 3.13 from trusty, so that's good. The following packages have unmet dependencies:

 cloud-installer : Depends: python3-maasclient but it is not going to be installed
               Depends: python3-macumba but it is not going to be installed
               Depends: python3-oauthlib but it is not installable
               Depends: python3-requests-oauthlib but it is not installable
               Depends: python3-termcolor but it is not installable
               Depends: python3-ws4py but it is not installable
               Depends: python3:any (>= 3.3.2-2~)
               Depends: python:any (>= 2.7.5-5~)


root@mr-0x9-cntr1:/etc/apt# python --version
Python 2.7.3

Setting an openstack password

When asked to set the openstack password, note that this password is used throughout all OpenStack-related services (e.g. it is the Horizon login password). The only service that does not use this password is juju-gui.

Uninstall

To uninstall and clean up your system, run the following:

sudo cloud-install -u

I decided to enable the dns (and also to switch the container-to-host bridge from br0 to lxcbr0).

Uncomment this line in /etc/default/lxc-net:

#LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf

Stop all containers, then restart lxc-net:

service lxc-net restart
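
The full sequence looks something like this (a sketch, using trusty1 as the container name):

sudo lxc-stop -n trusty1
sudo service lxc-net restart
sudo lxc-start -n trusty1 -d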

Configure ip addresses in /etc/lxc/dnsmasq.conf:

dhcp-host={NAME},10.0.3.2

where {NAME} is the name of your LXC container:

/var/lib/lxc/{NAME}
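
For example, to pin the trusty1 container to 10.0.3.2 (a sketch):

echo 'dhcp-host=trusty1,10.0.3.2' | sudo tee -a /etc/lxc/dnsmasq.conf
sudo service lxc-net restart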

It seems my container's link to the host over lxcbr0 uses 10.0.3.x addresses, but the dnsmasq now running inside the container uses 10.0.4.x addresses:

lxc-dns+   667  0.0  0.0  28208   964 ?        S    18:42   0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.4.1 --dhcp-range 10.0.4.2,10.0.4.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

libvirt+   798  0.0  0.0  28208   968 ?        S    18:42   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf

This lxcbr0 is running in my container

 lxcbr0    Link encap:Ethernet  HWaddr de:05:37:ad:a7:ce  
      inet addr:10.0.4.1  Bcast:10.0.4.255  Mask:255.255.255.0
      inet6 addr: fe80::dc05:37ff:fead:a7ce/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:37 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:0 (0.0 B)  TX bytes:6606 (6.6 KB)

My container's eth0 uses this:

eth0  Link encap:Ethernet  HWaddr 00:16:3e:9a:1c:d0  
      inet addr:10.0.3.2  Bcast:10.0.3.255  Mask:255.255.255.0
      inet6 addr: fe80::216:3eff:fe9a:1cd0/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:597 errors:0 dropped:0 overruns:0 frame:0
      TX packets:436 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:695543 (695.5 KB)  TX bytes:50776 (50.7 KB)

Here's another interesting bridge that was created in my container (libvirt's default virbr0):

virbr0    Link encap:Ethernet  HWaddr 02:39:0a:23:7d:23  
      inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

My host seems to be listening for the 10.0.3.x dns. Another 10.0.4.x dnsmasq also seems visible there; that might be the one from inside the container.

125       1817  0.0  0.0  25976  1008 ?        S    10:11   0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --conf-file=/etc/lxc/dnsmasq.conf --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

usbmux   13480  0.0  0.0  28208   964 ?        S    11:42   0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.4.1 --dhcp-range 10.0.4.2,10.0.4.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative

avahi    13694  0.0  0.0  28208   968 ?        S    11:42   0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf

Adam says:

The floating IP’s are automatically configured based on your current network. On the nova-cloud-installer machine we place a script in /tmp/nova-controller-setup.sh which defines the networks used in Openstack. If your IP’s differ from what your network device uses then you’ll want to manually create your networks in neutron.

Trying ssh into 10.0.3.1 (host) and 10.0.4.1 (container) to see where they go.

I need to say okay for authorized keys, and I need to enable root login to the container (see /etc/ssh/sshd_config below).

Ubuntu 14.04 has a more restrictive default.

/etc/ssh/sshd_config:

PermitRootLogin without-password

change to

PermitRootLogin yes

then

service ssh restart
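
Or as a one-liner (a sketch; assumes the PermitRootLogin line isn't commented out):

sudo sed -i 's/^PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service ssh restart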

You probably want to exchange keys for passwordless ssh (in the container):

ssh-keygen
ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.3.1
ssh-copy-id -i ~/.ssh/id_rsa.pub 10.0.4.1

Then test that it worked.
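
Something like this should log in and run a command without prompting for a password:

ssh 10.0.3.1 hostname
ssh 10.0.4.1 hostname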

Hmm, what about this guy?

virbr0    Link encap:Ethernet  HWaddr 02:39:0a:23:7d:23
      inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0

It's there:

ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.045 ms

ssh to it too, for good measure:

ssh 192.168.122.1
The authenticity of host '192.168.122.1 (192.168.122.1)' can't be established.
ECDSA key fingerprint is be:c8:55:bf:48:93:07:13:b5:61:dc:d8:24:87:21:d1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.1' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-36-generic x86_64)

(I removed the experimental cloud-installer from /etc/apt/sources.list.d, so I just have the stable one.)

Now, sudo cloud-install. It doesn't like me as root (see /var/log/cloud-install.log), so I'll use kevin. I added the libvirtd group to kevin and ubuntu above.

2014-10-05 19:22:13 + sudo -H -u root juju bootstrap
2014-10-05 19:22:14 WARNING unused config option: "tools-dir" -> ""
2014-10-05 19:22:14 uploading tools for series [precise trusty]
2014-10-05 19:22:17 WARNING unused config option: "tools-dir" ->     "/root/.juju/local/storage/tools/releases"
2014-10-05 19:22:17 Bootstrap failed, destroying environment
2014-10-05 19:22:17 ERROR bootstrapping a local environment must not be done as root

Another paper (nice with pictures). This might help before I do cloud-install in the container.

For me, all this is within an LXC container, i.e. LXCs within my trusty1 LXC.

https://insights.ubuntu.com/wp-content/uploads/Using_Juju_Local_Provider_KVM_and_LXC_Ubuntu_1404LTS.pdf

"Using Juju with a Local Provider with KVM and LXC in Ubuntu 14.04 LTS", a Dell and Canonical technical white paper by Mark Wenning (Canonical Field Engineer) and Jose De la Rosa (Dell Software Engineer).

KVM

The first step is to install virtualization packages:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils  virt-manager qemu-system

virt-manager and qemu-system are nice if you want to watch the VMs being created as the juju charms are brought up. After installing KVM, it is a good idea to reboot your system (here, the container).
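
After the reboot, a quick check that libvirt is up (a sketch; an empty domain list is expected at this point):

virsh -c qemu:///system list --all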

Juju

As of this writing, the “stock” Trusty install will install juju 1.18.1. In order to use some of the “advanced” features here, we will need at least juju 1.18.3. Therefore, we must install from the juju “stable” PPA (Personal Package Archive):

sudo add-apt-repository ppa:juju/stable  

sudo apt-get update
sudo apt-get install juju-core charm-tools juju-local juju-quickstart
sudo apt-get install uvtool-libvirt uvtool

Juju plug-ins

Juju plug-ins are a set of scripts that add functionality to Juju. Though optional, they make it much easier to debug and observe what is going on. Fetch the plug-in source:

su - kevin
cd ~/.
sudo apt-get install git
git clone https://github.com/juju/plugins.git ~/.juju-plugins

Update your system path so that juju can find the new scripts. (The paper uses .bash_profile; why not .bashrc? I decided to use .bashrc instead.)

echo 'PATH=$PATH:$HOME/.juju-plugins' >> ~/.bash_profile
source ~/.bash_profile
juju help plugins
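
The .bashrc variant mentioned above would be:

echo 'PATH=$PATH:$HOME/.juju-plugins' >> ~/.bashrc
source ~/.bashrc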

You can run “juju help plugins” to see the new plugin commands.

$ juju help plugins
ERROR 'juju-pdb --description': exit status 2
ERROR 'juju-get-filter --description': exit status 1
ERROR 'juju-run --description': exit status 2
ERROR 'juju-backup --description': exit status 255

Juju Plugins

Plugins are implemented as stand-alone executable files somewhere in the user's PATH.
The executable command must be of the format juju-<plugin name>.

backup      error occurred running 'juju-backup --description'
bundle      Tools for managing bundles
charm       Tools for authoring and maintaining charms
clean       Destroy environment and clean any remaining files
debug       Resolve a unit and immediately launch debug hooks
get-filter  error occurred running 'juju-get-filter --description'
git-charm   Clone and keep up-to-date a git repository containing a Juju charm for easy source managing.
kill        Destroy a juju object and reap the environment.
metadata    tools for generating and validating image and tools metadata
pdb         error occurred running 'juju-pdb --description'
pprint      Collapsed, pretty version of juju status
quickstart  set up a Juju environment (including the GUI) in very few steps
restore     Restore a backup made with juju backup
run         error occurred running 'juju-run --description'
setall      Set a configuration option across all services
test        Execute a charms functional tests

Spectating

Start up a second terminal and run “watch juju status” to watch as the charms and machines get created and started. For a more concise status (only showing the services), run “watch juju pprint”.
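
That is:

watch juju status
# concise, services-only view (needs the pprint plugin from above)
watch juju pprint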

Run “virt-manager” to bring up the VM manager GUI. This will show the KVM sessions as juju creates and starts them. (Oddly, Ubuntu 14.04 will just say "can't find" if you don't install it first.)

sudo apt-get install virt-manager
sudo virt-manager

Once juju-gui is up, you can also log into that service and see the charms and relationships as they are created.

Other possibilities

https://juju.ubuntu.com/docs/config-LXC.html
