Home
- Introduction
- Laravel App SAAS Architecture Overview
- Creating the Laravel App Multi-Tenant Model
- Configuring Nginx Ingress Controller
- Configuring Cert Manager to Obtain SSL
- Creating MySQL and REDIS Database for Laravel App cloud model
- Creating Space keys for Laravel App Cloud Multi Tenant Model
- Configuring Laravel App deployment
- Configuring configmap of .env for Laravel App deploy & cronjobs
- Configuring Metrics Server and Enabling Horizontal Scaling
- Automation of Laravel App Cloud Tenant Deploy
- Jenkins
- ARGO CD
- Conclusion
This document describes how to configure the Laravel App Cloud architecture on DigitalOcean.
- The section covers the general architecture overview of the Laravel App SAAS model.
- The below diagram represents the various Kubernetes components and resources used in the Laravel App SAAS model.
- This model supports both wildcard domains and custom domains with SSL.
- As shown in the architectural diagram there can be any number of Laravel App Pods but there must only be one Supervisor Pod. The Laravel App Pod serves the Laravel Applications, and the Supervisor pod is responsible for all the functions supported in the generic Laravel App version.
- The supervisor pod is common for all the tenants.
- The .env file which is generated during the first setup of the Laravel App Multi-Tenant model is then configured as a ConfigMap Kubernetes component and is mounted in all the Laravel App app pods and supervisor pod. When a new app pod is created the ConfigMap will be mounted automatically as per the YAML definition which will be discussed later in this document.
- For every client, a separate database and Space will be created; this information and the credentials for each client are stored in the master tenant database, which is created during the first setup of the Laravel App Multi-Tenant model.
- The Redis database will be shared among all the Tenants.
- Additionally, for each tenant a Cronjob will be created for their domain, irrespective of whether it is a custom or wildcard domain. These Cronjobs are triggered every minute.
- Deploy managed Kubernetes cluster on DigitalOcean by selecting the number of nodes and their resource specifications.
- Deploy the managed MySQL and Redis databases, and in their network settings allow your IPs and the Kubernetes cluster to communicate with them.
- Once the cluster is ready, configure kubectl on your local machine or any machine you prefer. The reference links explain how to install and configure doctl and kubectl, and how to generate a DigitalOcean API token to access the Kubernetes cluster via kubectl.
- Additionally, install useful tools like kubens and kubectx, which let you easily switch between contexts and namespaces.
- Once everything above is configured, deploy a basic Apache pod that exposes port 80 through its service.
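The cluster access setup above can be sketched with doctl (the cluster name is a placeholder):

```shell
# Authenticate doctl with the API token generated in the DigitalOcean panel
doctl auth init

# Download the kubeconfig for the cluster and set it as the current context
doctl kubernetes cluster kubeconfig save laravel-cloud-cluster

# Verify connectivity
kubectl get nodes
```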
- In order to allow outside traffic to reach the pods inside the Kubernetes cluster we need to create Nginx Ingress with a load balancer.
- There are several ways to deploy the Nginx Ingress Controller in the cluster: you can use the One-Click App from DigitalOcean or install each component manually from the documentation, but the best approach is to use the Kubernetes package manager, Helm.
- Once the Helm package manager is installed, follow the instructions from the official documentation:
- Nginx-ingress deploy
- Nginx-ingress
- or, use the below commands.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace
- The above commands install the Nginx Ingress Controller from the Helm repository and deploy a LoadBalancer on DigitalOcean. You can check the status of the LoadBalancer IP by running the command below.
kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller
- Once the LoadBalancer IP is available, add an A record for that domain in the DigitalOcean registrar.
- The wildcard domain “example.com” is added to the DigitalOcean domain registrar, and the DigitalOcean nameservers must be set at the actual domain registrar; refer to the screenshots below to understand further. This is necessary to obtain wildcard SSL certificates from cert-manager, which is discussed later in this document.
- Now the DigitalOcean LoadBalancer IP has an A record for the wildcard domain example.com in the DigitalOcean registrar, and the actual domain registrar carries the DigitalOcean NS records. For a custom domain, the tenant must add a CNAME record in their registrar pointing to “example.com” for the traffic to reach the Laravel App SAAS architecture.
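As a hedged CLI alternative for the DNS records, assuming doctl is configured (the LoadBalancer IP 203.0.113.10 is a placeholder):

```shell
# A record for the apex domain pointing at the ingress LoadBalancer
doctl compute domain records create example.com \
  --record-type A --record-name "@" --record-data 203.0.113.10

# Wildcard A record so every tenant subdomain resolves to the same LoadBalancer
doctl compute domain records create example.com \
  --record-type A --record-name "*" --record-data 203.0.113.10
```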
- In order to secure the traffic with HTTPS, we need to configure cert-manager to obtain free SSL certificates from Let's Encrypt.
- To secure your Ingress Resources, you’ll install Cert-Manager, create a ClusterIssuer for production, and modify the configuration of your Ingress to take advantage of the TLS certificates. ClusterIssuers are Cert-Manager Resources in Kubernetes that provision TLS certificates for the whole cluster. Once installed and configured, your app will be running behind HTTPS.
- Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
- Update your local Helm chart repository cache
helm repo update
- Install cert-manager and its Custom Resource Definitions (CRDs) like Issuers and ClusterIssuers
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.8.0 \
--set installCRDs=true
- To verify our installation, check the cert-manager Namespace for running pods:
kubectl get pods --namespace cert-manager
- Cert-Manager obtains certificates via two challenge types: DNS-01 for a wildcard domain and HTTP-01 for a single domain.
- To obtain SSL certificates for a wildcard domain, a Kubernetes secret containing a DigitalOcean API token has to be configured. Refer to the links below on how to generate a DigitalOcean token and add it for DNS validation.
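As a sketch, the secret referenced by the ClusterIssuer below can be created like this (the token value is a placeholder):

```shell
# cert-manager's DigitalOcean DNS-01 solver reads the API token from a secret;
# the name (digitalocean-dns) and key (access-token) must match the
# ClusterIssuer's tokenSecretRef
kubectl -n cert-manager create secret generic digitalocean-dns \
  --from-literal=access-token=<your-digitalocean-api-token>
```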
- Once the secret is created, the next step is to configure the ClusterIssuer Kubernetes resource. Below is the YAML configuration for the ClusterIssuer.
clusterissuer.yml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
namespace: cert-manager
spec:
acme:
email: test@example.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
# - selector: {}
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: access-token
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-body-size: "600m"
nginx.org/client-max-body-size: "600m"
nginx.ingress.kubernetes.io/ssl-passthrough: "false"
nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
name: laravelapp-ingress
namespace: default
spec:
tls:
- hosts:
- '*.example.com'
secretName: laravelapp-tls
rules:
- host: '*.example.com'
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: laravelappcloud-svc
port:
number: 80
- The above ingress configuration will obtain a wildcard certificate for example.com
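To apply and verify the wildcard setup, something along these lines should work (resource names follow the manifests above):

```shell
kubectl apply -f clusterissuer.yml
kubectl apply -f ingress.yml

# cert-manager creates a Certificate named after the TLS secret; READY turns
# True once the DNS-01 challenge completes (this can take a few minutes)
kubectl get certificate laravelapp-tls -n default -w
```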
- To obtain SSL certificates for a custom domain, an HTTP-01 challenge has to be performed by cert-manager. Below are sample ClusterIssuer and Ingress configurations for a custom domain.
custom-clusterissuer.yml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-custom
namespace: cert-manager
spec:
acme:
# Email address used for ACME registration
email: test@example.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Name of a secret used to store the ACME account private key
name: letsencrypt-custom
# Add a single challenge solver, HTTP01 using nginx
solvers:
- http01:
ingress:
class: nginx
custom-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-custom
name: custom-ingress
namespace: default
spec:
tls:
- hosts:
- custom.example.com
secretName: custom-tls
rules:
- host: custom.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: laravelappcloud-svc
port:
number: 80
- Now we need to create the MySQL and Redis databases for Laravel App. In this case we will use DigitalOcean's managed MySQL 8 and Redis databases; to create them, go to DigitalOcean and select Databases, which opens the database overview screen.
- Click the Create icon, select the MySQL database cluster (which defaults to version 8), select the region and plan for the cluster, and give it a proper name. Once this is done, click the Create Database Cluster button at the bottom of the page; this creates the MySQL cluster.
- Once the database creation completes, click on the created database. In its overview there is a Get Started option, which takes you to the networking page; there, select the Kubernetes cluster and the other public IPs from which you will connect to the MySQL cluster remotely. After adding the details, move to the next page to find the connection details for the MySQL cluster (MySQL username, user password, port, host). Take note of these; they remain available on the same page, so you can retrieve them anytime.
- After getting these details create a database by logging into the mysql using the mysql cli commands below:
mysql -h <hostname> -P <port> -u <username> -p<password>
create database <db name>;
- Once the MySQL cluster is created, we need to create the Redis cluster. The process is the same as the MySQL cluster creation: repeat the steps above, don't forget to configure the networking, and after the creation process finishes it will give us the connection details, just like MySQL.
- MySQL and REDIS Cluster created successfully.
- We need to create a key for Spaces (S3) so Laravel App can create Spaces in DigitalOcean; to do that, follow the instructions below.
- Log in to the DigitalOcean dashboard; on the left panel, click API to go to the token generation page. Next to Tokens there is a Spaces Keys tab; click on it, generate a new key, and copy it somewhere safe.
- We will use this later on Laravel App deployment.
- As the domain and SSL certificates are ready, we need to create the Laravel App resources: the Laravel App pods, the Laravel App Supervisor pods, config files, etc.
- Laravel App application pod YAML: below is the configuration for the Apache deployment.
deployment.yml
apiVersion: v1
kind: Service
metadata:
name: laravelcloud-svc
namespace: default
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: laravelcloud-pods
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: laravelcloud-deploy
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: laravelcloud-pods
minReadySeconds: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: laravelcloud-pods
spec:
# nodeSelector:
# doks.digitalocean.com/node-pool: k8s-pool-1-ray6v
containers:
- name: laravel-apache
image: sample-laravel/laravel-multi-tenant-apache2:dev.82
resources:
limits:
memory: 1024Mi
cpu: 1
requests:
memory: 900Mi
cpu: 800m
livenessProbe:
httpGet:
path: /probe.php
port: 80
timeoutSeconds: 5
periodSeconds: 7
failureThreshold: 4
readinessProbe:
httpGet:
path: /probe.php
port: 80
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 7
volumeMounts:
- name: volume-data
mountPath: /var/www/html
# - name: cm
# mountPath: /var/www/html/.env
# subPath: .env
initContainers:
- name: fetch
image: sample-laravel/fetcher
command: ['sh','-c','apt update;apt install git -y; git clone --branch master https://git-username:git-token@github.com/sample/laravel-app-cloud.git html;chown -R www-data:www-data /html']
volumeMounts:
- name: volume-data
mountPath: /html
volumes:
- name: volume-data
emptyDir: {}
# - name: cm
# configMap:
# name: laravelcloud-env
- The above file is the deployment configuration for the Laravel application. A few lines are commented out here; we will uncomment them after the initial installation.
- Once this is applied, the Laravel App deployment will be created in the cluster, with app pods containing the Laravel App multi-tenancy code. We have also created readiness and liveness checks, and added the rolling update feature to make the deployment more available.
- Now if we check the running pods, there will be one pod, since we set replicas to 1.
- We need to connect to that pod container and start the initial installation.
- To connect to the pod:
kubectl exec -it <pod name> -- bash
- To get the pod name:
kubectl get pods
- Once connected, the terminal opens in the Laravel App root directory; here we need to run the command below.
php artisan tenancy:install
- This will prompt you for the below details
Main domain: billing.example.com
Database username: <db username created above>
Database user password: <db user password created above>
Database Host: <db host name created above>
Database port: <db port created above>
Database Name: <db name created above>
App url: https://billing.example.com
- Once the above is entered and the installation runs without error, the installation was successful.
- A known error can occur during the tenancy installation, as shown below:
SQLSTATE[HY000]: General error: 3750 Unable to create or change a table without a primary key, when the system variable 'sql_require_primary_key' is set. Add a primary key to the table or unset this variable to avoid this message. Note that tables without a primary key can cause performance problems in row-based replication, so please consult your DBA before changing this setting. (SQL: create table tenants (id varchar(255) not null, created_at timestamp null, updated_at timestamp null, data json null) default character set utf8 collate 'utf8_unicode_ci' engine = Innodb)
- The above error can be resolved as follows.
- To resolve the issue, have a DigitalOcean token at hand, export it as a bearer token, and run the two curl commands below:
- To get the bearer token :
export DIGITALOCEAN_TOKEN=<the token we created>
- Then run this command:
curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" "https://api.digitalocean.com/v2/databases"
- The above command returns the database ID; use it in the next command.
curl -X PATCH -H "Content-Type: application/json" -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" -d '{"config": { "sql_require_primary_key": false}}' "https://api.digitalocean.com/v2/databases/<your database id>/config"
- Once the above commands have run, retry the php artisan command and the issue will be resolved.
- Once the installation is completed, we need to create a ConfigMap of the .env file to mount it into all the Laravel App, supervisor, and cronjob pods.
- To create the ConfigMap, we need the contents of the .env file: exec into the running Laravel App pod with the command used previously, and copy the file named .env.
- From the tenancy installation we have the .env file, and from the earlier steps we have the Redis cluster details and the Spaces keys; we need all of these to configure the ConfigMap.
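One hedged way to extract the file and render a ConfigMap manifest from it (the pod name is a placeholder):

```shell
# Copy the generated .env out of the running Laravel App pod
kubectl exec <laravel-app-pod> -- cat /var/www/html/.env > .env

# Render (but do not apply) a ConfigMap manifest from it, so the Redis and
# Spaces values can be added before applying
kubectl create configmap laravelappcloud-env --from-file=.env \
  --dry-run=client -o yaml > configmap.yml
```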
- The below is the sample of the .env before editing:
apiVersion: v1
kind: ConfigMap
metadata:
name: laravelappcloud-env
namespace: default
data:
.env: |
APP_NAME=laraval:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
APP_DEBUG=false
APP_BUGSNAG=true
APP_URL=https://billing.example.com
APP_KEY=base64:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
DB_TYPE=mysql
DB_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_PORT="25060"
DB_DATABASE="laraval"
DB_USERNAME="doadmin"
DB_PASSWORD="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_ENGINE=InnoDB
MAIL_MAILER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
QUEUE_DRIVER=sync
REDIS_DATABASE=0
CENTRAL_DOMAIN=billing.example.com
DB_INSTALL=1
CACHE_DRIVER=array
AWS_SECRET_ACCESS_KEY=
AWS_ACCESS_KEY_ID=
AWS_DEFAULT_REGION=
AWS_BUCKET=
AWS_ENDPOINT=
FILESYSTEM_DISK=s3
- We need to add the Redis cluster details and Spaces keys, as shown below.
apiVersion: v1
kind: ConfigMap
metadata:
name: laravelappcloud-env
namespace: default
data:
.env: |
APP_NAME=laraval:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
APP_DEBUG=false
APP_BUGSNAG=true
APP_URL=https://billing.example.com
APP_KEY=base64:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
DB_TYPE=mysql
DB_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_PORT="25060"
DB_DATABASE="laraval"
DB_USERNAME="doadmin"
DB_PASSWORD="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_ENGINE=InnoDB
MAIL_MAILER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
QUEUE_DRIVER=sync
REDIS_DATABASE=0
CENTRAL_DOMAIN=billing.example.com
DB_INSTALL=1
CACHE_DRIVER=array
AWS_SECRET_ACCESS_KEY=
AWS_ACCESS_KEY_ID=
AWS_DEFAULT_REGION=
AWS_BUCKET=
AWS_ENDPOINT=
FILESYSTEM_DISK=s3
REDIS_HOST=xxxxxxxxxxxxxxxxxxxxxxxxdb.ondigitalocean.com
REDIS_PORT=25061
REDIS_PASSWORD=xxxxxxxxxxxxxxxxxx
REDIS_SCHEME=tls
- With the above details now we will create a config map manifest for Laravel App cloud deploy and cronjobs. Below is the one for Laravel App cloud deploy.
configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: laravelappcloud-env
namespace: default
data:
.env: |
APP_NAME=laraval:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
APP_DEBUG=false
APP_BUGSNAG=true
APP_URL=https://billing.example.com
APP_KEY=base64:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
DB_TYPE=mysql
DB_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_PORT="25060"
DB_DATABASE="laraval"
DB_USERNAME="doadmin"
DB_PASSWORD="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_ENGINE=InnoDB
MAIL_MAILER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
QUEUE_DRIVER=sync
REDIS_DATABASE=0
CENTRAL_DOMAIN=billing.example.com
DB_INSTALL=1
CACHE_DRIVER=array
AWS_SECRET_ACCESS_KEY=
AWS_ACCESS_KEY_ID=
AWS_DEFAULT_REGION=
AWS_BUCKET=
AWS_ENDPOINT=
FILESYSTEM_DISK=s3
REDIS_HOST=xxxxxxxxxxxxxxxxxxxxxxxxdb.ondigitalocean.com
REDIS_PORT=25061
REDIS_PASSWORD=xxxxxxxxxxxxxxxxxx
REDIS_SCHEME=tls
cron-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: laravelappcloud-env
namespace: cronjobs
data:
.env: |
APP_NAME=laraval:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
APP_DEBUG=false
APP_BUGSNAG=true
APP_URL=https://billing.example.com
APP_KEY=base64:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
DB_TYPE=mysql
DB_HOST="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_PORT="25060"
DB_DATABASE="laraval"
DB_USERNAME="doadmin"
DB_PASSWORD="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
DB_ENGINE=InnoDB
MAIL_MAILER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
QUEUE_DRIVER=sync
REDIS_DATABASE=0
CENTRAL_DOMAIN=billing.example.com
DB_INSTALL=1
CACHE_DRIVER=array
AWS_SECRET_ACCESS_KEY=
AWS_ACCESS_KEY_ID=
AWS_DEFAULT_REGION=
AWS_BUCKET=
AWS_ENDPOINT=
FILESYSTEM_DISK=s3
REDIS_HOST=xxxxxxxxxxxxxxxxxxxxxxxxdb.ondigitalocean.com
REDIS_PORT=25061
REDIS_PASSWORD=xxxxxxxxxxxxxxxxxx
REDIS_SCHEME=tls
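The two ConfigMaps above can then be applied; note that the cronjobs namespace has to exist first (a sketch, assuming the filenames used in this document):

```shell
# The cron ConfigMap lives in its own namespace
kubectl create namespace cronjobs

kubectl apply -f configmap.yml     # laravelappcloud-env in the default namespace
kubectl apply -f cron-config.yml   # laravelappcloud-env in the cronjobs namespace
```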
- After creating the ConfigMaps and applying them to the cluster, we need to make some changes to deployment.yml.
- Below is the changed YAML file.
deployment.yml
apiVersion: v1
kind: Service
metadata:
name: laravelappcloud-svc
namespace: default
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 80
selector:
app: laravelappcloud-pods
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: laravelappcloud-deploy
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: laravelappcloud-pods
minReadySeconds: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: laravelappcloud-pods
spec:
# nodeSelector:
# doks.digitalocean.com/node-pool: k8s-pool-1-ray6v
containers:
- name: laravel-apache
image: sample-laravel/laravel-multi-tenant-apache2:dev.82
resources:
limits:
memory: 1024Mi
cpu: 1
requests:
memory: 900Mi
cpu: 800m
livenessProbe:
httpGet:
path: /probe.php
port: 80
timeoutSeconds: 5
periodSeconds: 7
failureThreshold: 4
readinessProbe:
httpGet:
path: /probe.php
port: 80
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 7
volumeMounts:
- name: volume-data
mountPath: /var/www/html
- name: cm
mountPath: /var/www/html/.env
subPath: .env
initContainers:
- name: fetch
image: sample-laravel/fetcher
command: ['sh','-c','apt update;apt install git -y; git clone --branch master https://git-username:git-token@github.com/sample/laravel-app-cloud.git html;chown -R www-data:www-data /html']
volumeMounts:
- name: volume-data
mountPath: /html
volumes:
- name: volume-data
emptyDir: {}
- name: cm
configMap:
name: laravelappcloud-env
- We need to save this file and apply it to the cluster for the changes to take effect.
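Applying and watching the rollout might look like this (a sketch using the names from deployment.yml):

```shell
kubectl apply -f deployment.yml

# Blocks until the rolling update completes and pods pass their readiness probes
kubectl rollout status deployment/laravelappcloud-deploy -n default
```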
- Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
- Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.
- As usual, the best way to install Metrics Server is by using the Helm package manager.
kubectl create ns metrics-server
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server
helm repo update
helm install metrics-server metrics-server/metrics-server -n metrics-server
- To see the metrics collected by the metrics server run the following commands
kubectl top nodes
kubectl top pods -A
- Once the Metrics Server is installed in the cluster, we can verify pod and node resource consumption with the above commands.
- Now we need to enable autoscaling with the Horizontal Pod Autoscaler (HPA). To do that, we create and apply a YAML manifest file like the one below, and also enable autoscaling on the DigitalOcean cluster.
- Follow this link to enable autoscaling in the DigitalOcean cluster: HPA configuration in the Dashboard.
- Once the above is done, the DigitalOcean cluster nodes will autoscale, but we still need to set the metrics thresholds that the pod autoscaler watches. For that we use the Metrics Server installed previously, by creating and applying a YAML manifest file with the resource thresholds as shown below.
hpa.yml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: hpa-laravelappcloud
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: laravelappcloud-deploy
minReplicas: 1
maxReplicas: 2
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
- The above hpa.yml specifies the min and max replicas; we can change these as per our requirements and the load in the cluster.
- We also specified the resource targets for CPU and memory; these are the utilization thresholds that trigger autoscaling.
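Applying and inspecting the autoscaler (a sketch using the names from hpa.yml):

```shell
kubectl apply -f hpa.yml

# TARGETS shows current vs. threshold utilization once the Metrics Server
# starts reporting metrics for the pods
kubectl get hpa hpa-laravelappcloud -n default
```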
- For the Cronjob YAML, we have to upload the sample Laravel app file system to the /var/www directory of the cron images.
- We are using Jenkins and ArgoCD for the automation of Laravel App cloud.
- We use Jenkins to create the Cronjob manifests and custom domain manifests. We create multiple jobs, each performing a different task:
- Cloud-Cronjob - Creates the cronjob YAML manifest file on Laravel App tenant creation.
- Cloud-Cronjob-Delete - Deletes the cronjob YAML file created above when the tenant is deleted.
- Cloud-Custom-Domain - Creates the ingress file and cronjobs for custom domain tenants on tenant creation.
- Cloud-Custom-Domain-Delete - Deletes the ingress file and the cronjob file when the custom domain tenant is deleted.
- Now below are the instructions to create the jobs in jenkins:
- First, we need to create a user for Laravel App cloud in Jenkins; to do that, follow the steps below.
- Log in to Jenkins (we are using jenkins.example.com). Once logged in, there is a Manage Jenkins option on the left panel; click on it.
- This opens the Manage Jenkins page. Under Security there is a Manage Users option; click on it, and on the page that opens there is a Create User option in the top right corner; click on it.
- This opens the user creation page, where you give a Username, Password, Full name, and E-mail for the user. We can create a user named cloud user with a proper password and email.
- Once this is done, we need to give permissions to the user. To do this, click Configure Global Security on the Manage Jenkins page, under Security.
- This opens a new page where you add your user by clicking the Add user button as shown below. It asks for the user ID; here we give the username created previously. Once added, grant the permissions as given to the cloud test user in the snap below.
* Once the above is done, save the page and exit. After this, we need to create an API token for the user we created, to integrate Jenkins with Laravel App Billing. Follow the steps below.
* First, log out from the current user and log in as the newly created user.
* Then go to Manage Jenkins and open Manage Users under Security.
* Then, near the Delete button on the left, there is a user edit option; click on it.
* This opens the user edit page. Under API Token there is an Add New Token option; click it, give the token a name such as CLOUDAUTHTOKEN, generate it, and copy the generated token and the username somewhere safe.
* We need to add the GitHub credentials to Jenkins so it can use GitHub to store the created cron files and custom domain files.
* To do that, follow the steps below:
* Log in to the dashboard, go to Manage Jenkins, then Credentials, and select System from the list on the page. This opens a page with Global credentials (unrestricted); click on it, and on the next page there is an Add Credentials button in the top right corner.
* Once you click Add Credentials, a page opens where you can add the credentials: select Kind as Username and password, add your GitHub username and password, give it a reasonable ID, and save it.
- Now we have the necessary details for creating the jenkins job follow the below steps to create the jenkins job.
- Log in to Jenkins with admin credentials; on the dashboard there is a + New Item option. Click on it to open the job creation page.
- Give the name Cloud-Cronjob and select Freestyle project as the project type.
- It will then open the configuration page; follow the steps below to configure the job. The procedure is the same for all the jobs.
- The snap below shows the configured job; how to configure it is explained below.
- Once the configuration page is open, you can see the general configuration as in Snap 1 below.
- As shown in Snap 1, select (1) This project is parameterised under Description, click Add Parameter, set the (2) Name to domain, and set the (3) Default Value to the main domain provided during tenancy install.
- Next is Source Code Management, where we point to the GitHub repository that stores our YAML manifest files. Select (4) Git, add the (5) repository URL, select the (6) Git credentials we added previously, and enter the branch of the repository where the files live in the (7) Branch Specifier, as shown in Snap 1.
* Once the above is done, the next step is to specify Build Triggers. Our build will be triggered from Laravel App Billing, so select (8) Trigger builds remotely and, in the Authentication Token field, enter the (9) cloud auth token we created before, as shown in Snap 2.
* Next is the Build Environment: select (10) Delete workspace before build starts, as shown in Snap 2.
* Next is Build Steps, as shown in Snap 2. Under Build Steps select (11) Execute Shell; we use multiple shell commands to create the YAML manifest files. These differ per job and are listed at the end.
* In Execute Shell, under the (12) Command box, we enter the shell commands; they are shared at the end.
* After adding the Build Steps, we configure the (13) Post-build Actions. Here select (14) Git Publisher, then (15) Push only if build succeeds, and enter the (16) branch to push and the (17) target remote name in Git to push the changes, as shown in Snap 3.
* After this, we can configure a notification to alert us if the build fails: select (18) E-mail Notification under Post-build Actions, enter the (19) recipients in the box, and select (20) send e-mail for every unstable build, as shown in Snap 3.
* Once the above is done, save and apply the job and exit, as shown in (21) Save and Apply in Snap 3.
- After Snap 3 the commands for the jobs are listed; place each in the Execute Shell (12) Command box as mentioned above.
- Jobs to be created in Jenkins:
- Cloud-Cronjob - Creates the cronjob YAML manifest file on Laravel App tenant creation.
- Cloud-Cronjob-Delete - Deletes the cronjob YAML file created above when the tenant is deleted.
- Cloud-Custom-Domain - Creates the ingress file and cronjobs for custom domain tenants on tenant creation.
- Cloud-Custom-Domain-Delete - Deletes the ingress file and the cronjob file when the custom domain tenant is deleted.
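As a hedged sketch of how these parameterised jobs can be triggered remotely from Laravel App Billing (the user, API token, and job token are placeholders):

```shell
# Start the Cloud-Cronjob build for a tenant domain; "token" is the job's
# authentication token (9) and "domain" is the build parameter (2)
curl -u <jenkins-user>:<api-token> \
  "https://jenkins.example.com/job/Cloud-Cronjob/buildWithParameters?token=<job-token>&domain=tenant1.example.com"
```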
- In the commands below, replace your GitHub username and token.
- Commands for the CronJobs:
Cloud-Cronjob
git clone https://replace-your-githubuser:githubtoken@github.com/sample/cloud-config.git
url=${domain}
cat <<EOF > cloud-domains/${url}.yml
apiVersion: batch/v1
kind: CronJob
metadata:
name: cron-${url}
namespace: cronjobs
spec:
schedule: "* * * * *"
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
containers:
- name: laravalappcloud-cron
image: sample-laravel/cron
imagePullPolicy: IfNotPresent
command:
- /bin/bash
- -c
- yes A | unzip -qq /var/www/laravalcloud.zip -d /var/www/html; php artisan tenants:run schedule:run --tenants=${url} > /var/www/html/cron.txt ; cat cron.txt 2>&1
volumeMounts:
- name: volume-data
mountPath: /var/www/html
- name: cm
mountPath: /var/www/html/.env
subPath: .env
# resources:
# requests:
# cpu: "100m"
# limits:
# cpu: "100m"
restartPolicy: OnFailure
volumes:
- name: volume-data
emptyDir: {}
- name: cm
configMap:
name: laravelappcloud-env
EOF
cat cloud-domains/${url}.yml
git add .
git commit -m "${url} Ingress and Cronjob manifests Added"
Cloud-Cronjob-Delete
git clone https://replace-your-githubuser:githubtoken@github.com/sample/cloud-config.git
url=${domain}
rm -f cloud-domains/${url}.yml
git add .
git commit -m "${url} Cronjob Deleted"
Cloud-Custom-Domain
```shell
git clone https://replace-your-githubuser:githubtoken@github.com/sample/cloud-config.git
cd cloud-config   # work inside the cloned repo
url=${domain}
cat <<EOF > cloud-domains/${url}.yml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cert-${url}
spec:
  acme:
    email: test@example.com   # replace with a real contact email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: key-${url}
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-${url}
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: cert-${url}
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-body-size: "600m"
    nginx.org/client-max-body-size: "600m"
spec:
  tls:
    - hosts:
        - ${url}
      secretName: key-${url}
  rules:
    - host: '${url}'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: laravalappcloud-svc
                port:
                  number: 80
EOF
cat cloud-domains/${url}.yml
git add .
git commit -m "${url} Custom domain Ingress added"
git push   # push so Argo CD can sync the new manifests
```
Cloud-Custom-Domain-Delete
```shell
git clone https://replace-your-githubuser:githubtoken@github.com/sample/cloud-config.git
cd cloud-config   # work inside the cloned repo
url=${domain}
rm -f cloud-domains/${url}.yml
git add .
git commit -m "${url} Custom Domain Deleted"
git push   # push so Argo CD prunes the deleted manifests
```
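Both delete jobs follow the same pattern; factoring it into a small function makes the cleanup logic easy to exercise locally (the `demo` directory and tenant name below are illustrative, not part of the setup):

```shell
# Remove a tenant's generated manifest from a checkout of the config repo.
delete_tenant_manifest() {
  local repo_dir="$1" domain="$2"
  rm -f "${repo_dir}/cloud-domains/${domain}.yml"
}

# Quick local check against a throwaway directory
mkdir -p demo/cloud-domains
touch demo/cloud-domains/tenant1.example.com.yml
delete_tenant_manifest demo tenant1.example.com
```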
- Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
- We use Argo CD to deploy the YAML manifest files in GitHub to the Kubernetes cluster. To do so, follow the steps below.
- Installing Argo CD in Kubernetes:
```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
- After installing Argo CD in the Kubernetes cluster, we need to create an ingress to make it accessible from the internet. Use the below YAML file to create the ingress.
argocd-ingress.yml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-body-size: "600m"
    nginx.org/client-max-body-size: "600m"
    meta.helm.sh/release-name: argocd
    meta.helm.sh/release-namespace: argocd
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: argocd.example.com
      http:
        paths:
          - backend:
              service:
                name: argocd-server
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - argocd.example.com
      secretName: argocd-tls
```
- Use the below command to apply this configuration.
```shell
kubectl apply -f argocd-ingress.yml
```
- Once the above is applied, open the domain host mentioned in argocd-ingress.yml (i.e., argocd.example.com) in a browser.
- It will ask for a user name and password to log in to the dashboard. The default user name is admin; the password is stored base64-encoded in a Kubernetes secret. Run the first command below to read the secret, copy the base64 value of the password field, and decode it with the second command to get the password for the Argo CD login page.
```shell
kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
echo "<BASE64_ENCODED_STRING>" | base64 --decode
```
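The two steps can also be combined into one command using jsonpath; the decode step is shown below on sample data so it can be tried without a cluster:

```shell
# One-liner (assumes the default install's secret name):
#   kubectl -n argocd get secret argocd-initial-admin-secret \
#     -o jsonpath="{.data.password}" | base64 --decode

# The decode step on its own, with sample data:
encoded="cGFzc3dvcmQ="                  # value copied from the secret YAML
password=$(echo "${encoded}" | base64 --decode)
echo "${password}"                      # prints: password
```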
- After logging in to the dashboard, we need to add the repository in the Argo CD GUI as shown in the below snap.
- On the right panel click on (1) Settings; this opens a tab. Click on (2) Repositories as shown in the above image.
- Once you click on Repositories, a new page opens where you will see (3) + Connect Repo; click on it as shown in the below snap.
- That opens a new page as shown in the below snap. Choose the (4) Via HTTPS connection method. Under "connect repo using https", select (5) git as the type, give a proper name for the project in (6) Project, add the repository HTTPS link in (7) Repository URL, then provide the GitHub (8) username and (9) password and click (10) CONNECT. After the "connection successful" message appears, click (11) SAVE AS CREDENTIALS TEMPLATE as shown in the below snap.
- Once the above is done, we need to create YAML manifest files that declare the folders in the git repository that Argo CD has to sync and apply to the Kubernetes cluster. We have two folders, which will be created as two applications to be synced with the cluster: one is production-cloud and the other is prod-domains.
- Below file is for the production-cloud folder sync.
argocd-application.yml
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-argocd-application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sample-app/cloud-development-env.git
    targetRevision: HEAD
    path: cloud-deploy
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
```
argocd-domains.yml
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-domains
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sample-app/cloud-development-env.git
    targetRevision: HEAD
    path: domains
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
```
- Once both files are applied to the cluster (kubectl apply -f argocd-application.yml -f argocd-domains.yml), the applications will be added to Argo CD. You can verify this by logging in to the Argo CD dashboard, where you will see the applications similar to this snap.