1. mcp-pxe : 10.100.0.0/24
2. mcp-control : 10.101.0.0/24
3. mcp-data(tenant) : 10.102.0.0/24
4. mcp-public : 172.17.18.32/27
5. mcp-proxy : 172.17.18.0/27
-
Deploy the Foundation physical node.
-
Configure bridges on the Foundation node:
br-mgm for the management network
br-ctl for the control network
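A minimal sketch of the bridge definitions in /etc/network/interfaces, assuming bridge-utils is installed; the physical NIC names (enp9s0f0, enp9s0f1) and host addresses are examples and must be adjusted to your hardware and the subnets listed above:
auto br-mgm
iface br-mgm inet static
    address 10.100.0.11
    netmask 255.255.255.0
    bridge_ports enp9s0f0
auto br-ctl
iface br-ctl inet static
    address 10.101.0.11
    netmask 255.255.255.0
    bridge_ports enp9s0f1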
-
Log in to the Foundation node.
mkdir -p /var/lib/libvirt/images/cfg01/
wget http://images.mirantis.com/cfg01-day01.qcow2 -O /var/lib/libvirt/images/cfg01/system.qcow2
cp /path/to/prepared-drive/cfg01-config.iso /var/lib/libvirt/images/cfg01/cfg01-config.iso
-
Create the Salt Master VM domain definition:
virt-install --name cfg01.mirantis.local \
  --disk path=/var/lib/libvirt/images/cfg01/system.qcow2,bus=virtio,cache=none \
  --disk path=/var/lib/libvirt/images/cfg01/cfg01-config.iso,device=cdrom \
  --network bridge:br-mgm,model=virtio \
  --network bridge:br-ctl,model=virtio \
  --ram 16384 --vcpus=8 --accelerate \
  --boot hd --vnc --noreboot --autostart
-
Download the shell script from GitHub:
export MCP_VERSION="2018.4.0"
wget https://raw.githubusercontent.com/Mirantis/mcp-common-scripts/${MCP_VERSION}/predefine-vm/define-vm.sh
chmod 0755 define-vm.sh
export VM_NAME="cfg01.sct.mr.ericsson.se"
export VM_SOURCE_DISK="/var/lib/libvirt/images/cfg01/system.qcow2"
export VM_CONFIG_DISK="/var/lib/libvirt/images/cfg01/cfg01.sct.mr.ericsson.se-config.iso"
export VM_MGM_BRIDGE_NAME="br-mgm"
export VM_CTL_BRIDGE_NAME="br-ctl"
export VM_MEM_KB="16777216"
export VM_CPUS="8"
./define-vm.sh
-
Start the Salt Master node VM:
virsh start cfg01.mirantis.local
virsh console cfg01.mirantis.local
Note: all classes will be placed under /srv/salt.
-
Verify that the following states are successfully applied during the execution of cloud-init:
/var/lib/cloud/instance/scripts/part-001
salt-call state.sls linux.system,linux,openssh,salt
salt-call state.sls maas.cluster,maas.region,reclass
-
If kvm01 is used as the Foundation node, perform the following steps on it:
a. Add the SaltStack repository to the APT sources:
deb [arch=amd64] http://repo.saltstack.com/apt/ubuntu/16.04/amd64/2016.3/ xenial main
b. Install the salt-minion package:
apt-get install salt-minion=2016.3.8
Note: if the repository key is missing, import it:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <Key>
c. Modify /etc/salt/minion.d/minion.conf:
id: <kvm01_FQDN>
master: <Salt_Master_IP_or_FQDN>
d. Restart the salt-minion service (see the command sketch after this list).
e. On the Salt Master node, check the output of the salt-key command to verify that the minion ID of kvm01 is present.
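A minimal command sketch for steps d and e, assuming Ubuntu 16.04 on kvm01:
service salt-minion restart                 # on kvm01
salt-key -L                                 # on the Salt Master node; the kvm01 minion ID should be listed
salt-key -a <kvm01_FQDN>                    # accept the key if it is still under Unaccepted Keys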
-
Edit /etc/ssh/sshd_config to permit root login.
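For example, a sketch of enabling root login and restarting the SSH daemon (on Ubuntu 16.04 the service is named ssh):
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
service ssh restart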
-
Add the user SSH key.
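For example, a sketch of pushing a public key from your workstation (host name and key are placeholders):
ssh-copy-id root@<foundation_node>
# or append the key manually on the node:
echo 'ssh-rsa AAAA...placeholder... user@workstation' >> /root/.ssh/authorized_keys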
-
Copy the qcow2 images to the cfg nodes.
Log in to cfg01.
a. mkdir /srv/salt/env/prd/images
b. cd /srv/salt/env/prd/images
c. wget http://images.mirantis.com.s3.amazonaws.com/ubuntu-16-04-x64-mcp2018.1.qcow2
Images are available from images.mirantis.net.
d. Edit infra/init.yml and replace:
salt_control_trusty_image: http://images.mirantis.com/ubuntu-14-04-x64-mcp${_param:apt_mk_version}.qcow2
with:
salt_control_trusty_image: salt://images/ubuntu-14-04-x64-mcp${_param:apt_mk_version}.qcow2
-
Add DHCP interfaces to all virtual networks.
Add the ens2 interface for the deploy network.
-
Add the following line to the interface section of each of these files:
./stacklight/networking/virtual.yml
./openstack/networking/virtual.yml
./opencontrail/networking/virtual.yml
./cicd/networking/virtual.yml
Line to add:
ens2: ${_param:linux_dhcp_interface}
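After the edit, the interface section of each listed file should contain an entry similar to the following (a structural sketch only; the surrounding keys follow the existing layout of those files):
parameters:
  linux:
    network:
      interface:
        ens2: ${_param:linux_dhcp_interface}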
-
SSH proxy command to connect to the MAAS web UI:
ssh -f root@hp01 -L 8080:10.100.0.15:8080 -N
-
Enable swap on the cfg node if it has limited memory.
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
-
Create the manage-projects script on the cicd01 nodes and make it executable:
chmod +x /usr/local/bin/manage-projects
-
Fix Gerrit tag issues on the CI/CD nodes.
Workaround:
a. In the Gerrit UI, add the access rule `Forge Committer Identity` for the group `Administrators` to `refs/tags/*` for the project `All-Projects`.
b. In the Gerrit UI, remove the projects `mcp-ci/pipeline-library` and `mk/mk-pipelines`.
c. On the cid01 node, remove the `/srv/jeepyb` directory.
d. On the cid01 node, run the Salt state `gerrit.client`.
-
Verify that the volume is mounted on Docker Swarm nodes:
salt '*' cmd.run 'systemctl -a|grep "GlusterFS File System"|grep -v mounted'
-
Check that all repositories are checked out at the same tag, and check for duplicate repositories.
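For example, a sketch of checking the tag of the Reclass system repository on cfg01 (the path is the default and may differ):
git -C /srv/salt/reclass/classes/system describe --tags
# repeat for the other mirrored repositories (for example mk/mk-pipelines and mcp-ci/pipeline-library)
# and confirm they point at the same MCP release tag; check 'git remote -v' for duplicate remotes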
-
Log in to the MAAS web UI through salt_master_management_address/MAAS with the following credentials:
Username: mirantis
Password: r00tme
-
Go to the Subnets tab.
-
Select the fabric that is under the deploy network.
-
In the VLANs on this fabric area, click the VLAN under the VLAN column where the deploy network subnet is.
-
In the Take action drop-down menu, select Provide DHCP.
-
Adjust the IP range as required.
Note: the number of IP addresses should not be less than the number of planned VCP nodes.
-
Click Provide DHCP to submit.
- Define all physical nodes under classes/cluster/<cluster_name>/infra/maas.yml using the following structure.
For example, to define the kvm02 node:
maas:
region:
machines:
kvm02:
interface:
mac: 00:25:90:eb:92:4a
power_parameters:
power_address: kvm02.ipmi.net
power_password: password
power_type: ipmi
power_user: ipmi_user
-
To get MAC addresses from IPMI, you can use ipmitool. Usage example for Supermicro:
ipmitool -U ipmi_user -P password -H kvm02.ipmi.net raw 0x30 0x21 1 | tail -c 18
-
Once you have defined all physical servers in your Reclass model, enforce the nodes:
salt-call maas.process_machines
-
All nodes are automatically commissioned.
Verify the status of servers either through the MAAS web UI or using the salt call command:
salt-call maas.machines_status
- Successfully commissioned servers appear in the Ready status.
(Optional) Enforce the interfaces configuration defined in the model for servers:
salt-call state.sls maas.machines.assign_ip
-
(Optional) Enforce the disk custom configuration defined in the model for servers:
salt-call state.sls maas.machines.storage
Verify that all servers have correct NIC names and configurations.
- Log in to the MAAS node console.
Type the salt-call command:
salt-call maas.deploy_machines
-
Check the status of the nodes:
salt-call maas.machines_status
-
When all servers have been provisioned, verify that they were automatically registered by running the salt-key command:
salt-key
-
Copy any example file from
cfg01:/srv/salt/reclass/classes/system/openssh/server/team/
to
cfg01:/srv/salt/reclass/classes/cluster/snv/infra/chandra.yml
Edit the file as required (a structural sketch follows).
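The copied file defines the team users and their SSH keys; a structural sketch of what such a class typically contains (the user name and key are placeholders):
parameters:
  linux:
    system:
      user:
        jdoe:
          enabled: true
          sudo: true
          full_name: John Doe
          home: /home/jdoe
  openssh:
    server:
      user:
        jdoe:
          enabled: true
          user: ${linux:system:user:jdoe}
          public_keys:
            - key: ssh-rsa AAAA...placeholder... jdoe@example.com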
-
Add the user class to the infra init file:
vim /srv/salt/reclass/classes/cluster/snv/infra/init.yml
Add the following entry under classes:
- cluster.snv.infra.chandra
-
Run the user and openssh states to create the new user:
salt '*' cmd.run 'salt-call state.sls linux.system.user,openssh'
-
If you encounter any errors, run the following command to check for misconfiguration:
reclass-salt --top
-
Verify that the cfg01 key has been added to Salt and your host FQDN is shown properly in the Accepted Keys field in the output of the following command:
salt-key
-
Verify that all pillars and Salt data are refreshed:
salt "*" saltutil.refresh_pillar
salt "*" saltutil.sync_all
-
Verify that the Reclass model is configured correctly. The following command output should show top states for all nodes:
reclass-salt --top
-
To verify later that the node reboots performed below succeed, create a trigger file:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway' cmd.run "touch /run/is_rebooted"
-
For KVM nodes:
salt --async -C 'I@salt:control' cmd.run 'salt-call state.sls linux.system.user,openssh,linux.network;reboot'
-
For compute nodes:
salt --async -C 'I@nova:compute' cmd.run 'salt-call state.sls linux.system.user,openssh,linux.network;reboot'
-
For gateway nodes, execute the following command only for the deployments with OVS setup with physical gateway nodes:
salt --async -C 'I@neutron:gateway' cmd.run 'salt-call state.sls linux.system.user,openssh,linux.network;reboot'
-
Verify that the targeted nodes are up and running:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway' test.ping
-
Check the previously created trigger file to verify that the targeted nodes are actually rebooted:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway' cmd.run 'if [ -f "/run/is_rebooted" ];then echo "Has not been rebooted!";else echo "Rebooted";fi'
All nodes should be in the Rebooted state.
-
Verify that the hardware nodes have the required network configuration. For example, verify the output of the ip a command:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway' cmd.run "ip a"
-
On the Salt Master node, prepare the node operating system by running the Salt linux state:
salt-call state.sls linux -l info
-
Verify that the Salt Minion nodes are synchronized by running the following command on the Salt Master node:
salt '*' saltutil.sync_all
-
Perform the initial Salt configuration:
salt 'kvm*' state.sls salt.minion
-
Set up the network interfaces and the SSH access:
salt -C 'I@salt:control' cmd.run 'salt-call state.sls linux.system.user,openssh,linux.network;reboot'
-
Run the libvirt state:
salt 'kvm*' state.sls libvirt
-
(Optional, only needed when OVS is enabled) Add system.salt.control.cluster.openstack_gateway_single to infra/kvm.yml to enable a gateway VM for your OpenStack environment, for example as shown below.
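The class goes into the classes list of infra/kvm.yml, for example (sketch):
classes:
- system.salt.control.cluster.openstack_gateway_single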
-
Run salt.control to create virtual machines. This command also inserts minion.conf files from KVM hosts:
salt 'kvm*' state.sls salt.control
-
Verify the nodes:
salt-key
- To set up the physical nodes for CI/CD:
Enable virtual IP:
salt -C 'I@salt:control' state.sls keepalived
Deploy the GlusterFS cluster:
salt -C 'I@glusterfs:server' state.sls glusterfs.server.service
salt -C 'I@glusterfs:server and *01*' state.sls glusterfs.server.setup
Note: if using a single KVM machine, apply the following fix.
On the kvm01 node:
vim /usr/lib/python2.7/dist-packages/salt/modules/glusterfs.py
Comment out these lines:
# if replica:
# cmd += 'replica {0} '.format(replica)
- CI/CD deployment:
a. Perform the initial configuration:
salt 'ci*' cmd.run 'salt-call state.sls salt.minion'
salt 'ci*' state.sls salt.minion,linux,openssh,ntp
b. Mount Gluster volumes from the KVM nodes:
salt -C 'I@glusterfs:client and I@docker:host' state.sls glusterfs.client
c. Configure virtual IP and HAProxy balancing:
salt -C 'I@haproxy:proxy and I@docker:host' state.sls haproxy,keepalived
d. Install Docker:
salt -C 'I@docker:host' state.sls docker.host
e. Initialize the Docker swarm leader:
salt -C 'I@docker:swarm:role:master' state.sls docker.swarm
f. Update the Salt mine to enable other swarm nodes to connect to the leader:
salt -C 'I@docker:swarm' state.sls salt
salt -C 'I@docker:swarm' mine.flush
salt -C 'I@docker:swarm' mine.update
g. Synchronize modules and states:
salt -C 'I@docker:swarm' saltutil.sync_all
h. Complete the Docker swarm deployment:
salt -C 'I@docker:swarm' state.sls docker.swarm
i. Verify that all nodes are in the cluster:
salt -C 'I@docker:swarm:role:master' cmd.run 'docker node ls'
j. Apply the aptly.publisher state:
salt -C 'I@aptly:publisher' state.sls aptly.publisher
k. Start the CI/CD containers, for example, MySQL, Aptly, Jenkins, Gerrit, and others:
salt -C 'I@docker:swarm:role:master' state.sls docker.client
l. (optional) Configure the Aptly service:
salt -C 'I@aptly:server' state.sls aptly
m. Configure the OpenLDAP service for Jenkins and Gerrit:
salt -C 'I@openldap:client' state.sls openldap
n. Configure the Gerrit service, create users, projects, and so on:
salt -C 'I@gerrit:client' state.sls gerrit
o. Configure the Jenkins service, create users, add pipelines, and so on:
salt -C 'I@jenkins:client' state.sls jenkins
Log in to the Jenkins web UI as admin.
The password for the admin user is defined in the classes/cluster/<cluster_name>/cicd/control/init.yml file of the Reclass model under the openldap_admin_password parameter.
In the global view, verify that the git-mirror-downstream-mk-pipelines and git-mirror-downstream-pipeline-library pipelines have successfully mirrored all content.
a. Set up network interfaces and the SSH access on all compute nodes:
salt -C 'I@nova:compute' cmd.run 'salt-call state.sls \
linux.system.user,openssh,linux.network;reboot'
b. If you run OVS, run the same command on physical gateway nodes as well:
salt -C 'I@neutron:gateway' cmd.run 'salt-call state.sls \
linux.system.user,openssh,linux.network;reboot'
c. Verify that all nodes are ready for deployment:
salt '*' state.sls linux,ntp,openssh,salt.minion
-
To deploy Keepalived:
salt -C 'I@keepalived:cluster' state.sls keepalived -b 1
-
Determine the VIP address for the current environment:
salt -C 'I@keepalived:cluster' pillar.get keepalived:cluster:instance:VIP:address
-
Verify if the obtained VIP address is assigned to any network interface on one of the controller nodes:
salt -C 'I@keepalived:cluster' cmd.run "ip a | grep <ENV_VIP_ADDRESS>"
-
To deploy NTP:
salt '*' state.sls ntp
-
To deploy GlusterFS:
salt -C 'I@glusterfs:server' state.sls glusterfs.server.service
salt -C 'I@glusterfs:server' state.sls glusterfs.server.setup -b 1
To verify GlusterFS:
salt -C 'I@glusterfs:server' cmd.run "gluster peer status; gluster volume status" -b 1
-
Apply the rabbitmq state:
salt -C 'I@rabbitmq:server' state.sls rabbitmq
Verify the RabbitMQ status:
salt -C 'I@rabbitmq:server' cmd.run "rabbitmqctl cluster_status"
-
Apply the galera state:
salt -C 'I@galera:master' state.sls galera
salt -C 'I@galera:slave' state.sls galera
Verify that Galera is up and running:
salt -C 'I@galera:master' mysql.status | grep -A1 wsrep_cluster_size
salt -C 'I@galera:slave' mysql.status | grep -A1 wsrep_cluster_size
To recover Galera manually, see:
https://docs.mirantis.com/mcp/q4-18/mcp-operations-guide/tshooting/tshoot-mcp-openstack/tshoot-galera/restore-galera-cluster/restore-galera-manually.html
-
To deploy HAProxy:
salt -C 'I@haproxy:proxy' state.sls haproxy
salt -C 'I@haproxy:proxy' service.status haproxy
salt -I 'haproxy:proxy' service.restart rsyslog
-
To deploy Memcached:
salt -C 'I@memcached:server' state.sls memcached
-
To deploy Keystone:
Set up the Keystone service:
salt -C 'I@keystone:server' state.sls keystone.server -b 1
Restart Apache2
salt -C 'I@keystone:server' service.restart apache2
Populate keystone services/tenants/admins:
salt -C 'I@keystone:client' state.sls keystone.client
salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; openstack service list"
-
To deploy Glance:
Install Glance and verify that GlusterFS clusters exist:
salt -C 'I@glance:server' state.sls glance -b 1
salt -C 'I@glusterfs:client' state.sls glusterfs.client
Update Fernet tokens before making requests to the Keystone server. Otherwise, you will get the following error: "No encryption keys found; run keystone-manage fernet_setup to bootstrap one":
salt -C 'I@keystone:server' state.sls keystone.server
salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; glance image-list"
- To deploy Nova:
Install Nova:
salt -C 'I@nova:controller' state.sls nova -b 1
salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; nova service-list"
On one of the controller nodes, verify that the Nova services are enabled and running:
root@cfg01:~# ssh ctl01 "source keystonerc; nova service-list"
- To deploy Cinder:
Install Cinder:
salt -C 'I@cinder:controller' state.sls cinder -b 1
On one of the controller nodes, verify that the Cinder service is enabled and running:
salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; cinder list"
-
To install Neutron:
salt -C 'I@neutron:server' state.sls neutron -b 1
salt -C 'I@neutron:gateway' state.sls neutron
salt -C 'I@keystone:server' cmd.run ". /root/keystonerc; neutron agent-list"
-
To install Horizon:
salt -C 'I@horizon:server' state.sls horizon
salt -C 'I@nginx:server' state.sls nginx
To install proxy nodes:
-
Add NAT for br2:
root@kvm01:~# iptables -t nat -A POSTROUTING -o br2 -j MASQUERADE
root@kvm01:~# echo "1" > /proc/sys/net/ipv4/ip_forward
root@kvm01:~# iptables-save > /etc/iptables/rules.v4
-
Deploy linux, openssh, and salt states to the proxy nodes:
root@cfg01:~# salt 'prx*' state.sls linux,openssh,salt
-
Verify the connection to Horizon:
You may first need to configure a SOCKS proxy or similar access to the environment network to reach it from your browser (see the sketch below). In a browser, connect to each of the proxy IPs and the VIP to verify that they are active.
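For example, a sketch of opening a SOCKS proxy over SSH through the Foundation node (hp01 is the example host used earlier), then configuring the browser to use a SOCKS5 proxy on localhost:8888:
ssh -f -N -D 8888 root@hp01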
-
Verify that the new machines have connectivity with the Salt Master node:
salt 'cmp*' test.ping
-
Run the reclass.storage state to refresh the deployed pillar data:
salt 'cfg*' state.sls reclass.storage
-
Apply the Salt data sync and base states for Linux, NTP, OpenSSH, and Salt on the target nodes:
salt 'cmp*' saltutil.sync_all
salt 'cmp*' saltutil.refresh_pillar
salt 'cmp*' state.sls linux,ntp,openssh,salt
-
Apply all states for the target nodes:
salt 'cmp*' state.apply