Milestone 2: Project Setup Instructions
- Clone or pull the repository to your local machine.
git clone https://github.com/airavata-courses/CAPtivate.git
- Go to the folder containing the code in the command prompt.
- Run the application using the following command.
docker-compose up
To check that the images were built on your local machine, run the following Docker command.
docker image ls
You can check the running service in a browser of your choice at
http://localhost:3000
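If you prefer the terminal, a quick reachability check can be sketched as below (the `check_ui` helper is hypothetical and assumes curl is installed; port 3000 comes from the compose setup above):

```shell
# Hypothetical helper: print only the HTTP status code for a URL.
# 200 means the UI answered; run it after `docker-compose up` has finished starting.
check_ui() {
  curl -s -o /dev/null -w "%{http_code}" "$1"
}
# check_ui http://localhost:3000
```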
Prerequisites:
- Docker/Docker Desktop (https://docs.docker.com/docker-for-windows/install/)
- Kubernetes
- Go to the deployments folder in your directory containing the code.
- Run the following commands in a command prompt.
kubectl apply -f zookeeper.yaml
kubectl apply -f kafka.yaml
kubectl apply -f db.yaml
kubectl apply -f services.yaml
kubectl apply -f api-gateway.yaml
kubectl apply -f user-management.yaml
kubectl apply -f session-management.yaml
kubectl apply -f data-retrieval.yaml
kubectl apply -f model-execution.yaml
kubectl apply -f post-process.yaml
kubectl apply -f ui.yaml
You can reach the deployed application at the URL shown for the ui deployment in the output of
kubectl get services
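As a sketch of turning that output into a browsable URL: the service name `ui` and the jsonpath query are assumptions based on ui.yaml, and the `ui_url` helper below is hypothetical.

```shell
# Hypothetical helper: build the URL from a node's public IP and the NodePort
# that `kubectl get services` reports for the ui service.
ui_url() {
  echo "http://$1:$2"
}
# Fetch the NodePort directly (requires kubectl access to the cluster):
# NODE_PORT=$(kubectl get service ui -o jsonpath='{.spec.ports[0].nodePort}')
# ui_url <node-public-ip> "$NODE_PORT"
```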
- Setting up the Kubernetes cluster on the Jetstream VMs
- Log in to https://iu.jetstream-cloud.org/auth/login/, ensure you are under the TG-CCR*****43 project allocation, and download the OpenRC File v3 from the dropdown (top right corner).
- Open your Git Bash terminal, navigate to the path where you placed the OpenRC v3 file, and run the following command.
source <OpenRC file>
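Sourcing the file prompts for your password and exports OS_* environment variables into the shell; a quick sanity check (the variable name OS_AUTH_URL is the standard one set by OpenRC v3 files):

```shell
# If the OpenRC file was sourced correctly, OS_AUTH_URL points at the cloud API.
if [ -n "${OS_AUTH_URL:-}" ]; then
  echo "OpenRC sourced, auth endpoint: $OS_AUTH_URL"
else
  echo "OpenRC not sourced yet"
fi
```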
You can follow the steps below to create VMs using our network and security groups (recommended), or create your own VM network and security groups using the OpenStack CLI commands.
- Create an SSH key pair to access the VMs in the OpenStack environment. When prompted for a password, enter your IU Jetstream password.
If you already have an SSH key:
cd ~/.ssh
openstack keypair create --public-key id_rsa.pub tg865486-api-key
If you don't have an SSH key:
ssh-keygen -b 2048 -t rsa -f tg865486-api-key -P ""
openstack keypair create --public-key tg865486-api-key.pub tg865486-api-key
- Now we will create the VMs to set up the architecture. You can replace the VM names (VM-master, VM-worker1, VM-worker2) with names of your choice.
openstack server create <VM-master> \
--flavor m1.small \
--image JS-API-Featured-Ubuntu18-Feb-14-2020 \
--key-name tg865486-api-key \
--security-group CAPtivate_security_group \
--nic net-id=tg865486_Captivate_net \
--wait
openstack server create <VM-worker1> \
--flavor m1.small \
--image JS-API-Featured-Ubuntu18-Feb-14-2020 \
--key-name tg865486-api-key \
--security-group CAPtivate_security_group \
--nic net-id=tg865486_Captivate_net \
--wait
openstack server create <VM-worker2> \
--flavor m1.small \
--image JS-API-Featured-Ubuntu18-Feb-14-2020 \
--key-name tg865486-api-key \
--security-group CAPtivate_security_group \
--nic net-id=tg865486_Captivate_net \
--wait
- Now we will assign floating IPs to all 3 VMs.
openstack floating ip create public
openstack server add floating ip <VM-master> <your.ip.number.here>
openstack floating ip create public
openstack server add floating ip <VM-worker1> <your.ip.number.here>
openstack floating ip create public
openstack server add floating ip <VM-worker2> <your.ip.number.here>
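The three create/assign pairs above can also be scripted. A minimal sketch: the VM names are the placeholders from the previous step, the `-f value -c floating_ip_address` output filter assumes a reasonably recent python-openstackclient, and the `command -v` guard just keeps the loop harmless on machines without the OpenStack CLI.

```shell
# Create one floating IP per VM and attach it, printing the mapping as we go.
for vm in VM-master VM-worker1 VM-worker2; do
  if command -v openstack >/dev/null 2>&1; then
    ip=$(openstack floating ip create public -f value -c floating_ip_address)
    openstack server add floating ip "$vm" "$ip"
    echo "$vm -> $ip"
  else
    echo "openstack CLI not found; skipping $vm"
  fi
done
```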
Now we have the VMs ready. To set up the Kubernetes system we will run some Ansible playbooks.
- Docker
- Ubuntu server/machine: This server acts as an Ansible control node, which we will use to connect to and control the Ansible hosts (our created VMs) over SSH. Your Ansible control node can be either your local machine or a server dedicated to running Ansible.
Make sure you are a non-root user and have an SSH key set up. You can follow the link for further clarification.
- Set up Ansible
sudo apt-add-repository ppa:ansible/ansible
Press ENTER when prompted to accept the PPA addition.
sudo apt update
sudo apt install ansible
- Set up the inventory file.
Now replace master_ip and worker_ip with the IPs of the master and worker VMs you created above.
To do that, open the Ansible hosts file with this command.
sudo nano /etc/ansible/hosts
Add the following to the hosts file
[servers]
master ansible_host=<master IP created> ansible_user=<ubuntu user>
worker1 ansible_host=<worker1 IP created> ansible_user=<ubuntu user>
worker2 ansible_host=<worker2 IP created> ansible_user=<ubuntu user>
[servers:vars]
ansible_python_interpreter=/usr/bin/python3
Save and exit from the hosts file.
- Test the connection. Next, verify that Ansible can reach all hosts using the ping module.
ansible all -m ping
Once the ping succeeds, copy the kube-cluster folder to the Ubuntu server/machine where you installed Ansible.
Download the kube-cluster folder from https://github.com/airavata-courses/CAPtivate/tree/master/docs/kube-cluster
Now replace the IP addresses of the master and worker VMs in the hosts file (~/kube-cluster/hosts) with the addresses of the VMs you created.
Now you are all set to run the Ansible playbooks. Say yes to adding the fingerprint 3 times (once for each of the VMs).
ansible-playbook -i hosts ~/kube-cluster/initials.yml
ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml
ansible-playbook -i hosts ~/kube-cluster/master.yml
ansible-playbook -i hosts ~/kube-cluster/workers.yml
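The four playbooks above can also be chained so that a failure stops the run early; a small sketch assuming the same ~/kube-cluster layout:

```shell
# Run the playbooks in dependency order; stop at the first failure.
for playbook in initials kube-dependencies master workers; do
  ansible-playbook -i hosts ~/kube-cluster/"$playbook".yml || break
done
```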
Once all the Ansible playbooks have run successfully, you can log in to your Kubernetes master using the command:
ssh ubuntu@<VM-master-IP>
To verify your Kubernetes installation, run the following command:
kubectl get nodes
You should see the master and both worker nodes in the Ready state.
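If the playbooks and the node join succeeded, the output should look roughly like this (ages and versions will differ; with kubeadm of this era the worker role column typically shows <none>):

```
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   10m   v1.17.x
worker1   Ready    <none>   9m    v1.17.x
worker2   Ready    <none>   9m    v1.17.x
```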
- Clone the repository
git clone https://github.com/airavata-courses/CAPtivate.git
- Go to the deployments folder.
cd CAPtivate/deployments
- Run the following commands on the master node, in the given order.
kubectl apply -f zookeeper.yaml
kubectl apply -f kafka.yaml
kubectl apply -f db.yaml
kubectl apply -f services.yaml
kubectl apply -f api-gateway.yaml
kubectl apply -f user-management.yaml
kubectl apply -f session-management.yaml
kubectl apply -f data-retrieval.yaml
kubectl apply -f model-execution.yaml
kubectl apply -f post-process.yaml
kubectl apply -f ui.yaml
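The eleven apply steps above can also be written as a loop that preserves the order (zookeeper and kafka before the services that depend on them); the `command -v` guard is only there so the sketch is harmless on a machine without kubectl.

```shell
# Apply each manifest in order, stopping at the first failure.
for f in zookeeper kafka db services api-gateway user-management \
         session-management data-retrieval model-execution post-process ui; do
  command -v kubectl >/dev/null 2>&1 || { echo "kubectl not found"; break; }
  kubectl apply -f "$f.yaml" || break
done
```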
You can find the current deployment here - http://149.165.168.90:31703
CI/CD is implemented through Travis and a remote Jenkins server. Each service has its own deployment branch, as listed below; the respective Travis scripts can be found in the .travis.yml at the root of each branch.
Jenkins build status and the pipeline can be verified at http://149.165.169.77:8080/ (Username: rakoduru, Password: rasmitha)
- deploy/api-gateway
- deploy/user-management
- deploy/session-management
- deploy/post-process
- deploy/user-interface
- deploy/model-execution
- deploy/data-retrieval