Check out the post.
Build the images and spin up the containers:
$ docker-compose up -d --build
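For reference, the project's docker-compose.yml might look roughly like the sketch below. The web service name and the sample/todos credentials match the commands in this README, but the image tag, port mapping, and connection string are assumptions, not the actual file:

version: '3.7'

services:

  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://sample:sample@postgres:5432/todos   # assumed connection string
    depends_on:
      - postgres

  postgres:
    image: postgres:13-alpine   # assumed Postgres image tag
    environment:
      - POSTGRES_USER=sample
      - POSTGRES_PASSWORD=sample
      - POSTGRES_DB=todos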
Run the migrations and seed the database:
$ docker-compose exec web knex migrate:latest
$ docker-compose exec web knex seed:run
Test it out at:
Install the Google Cloud SDK and run gcloud init to configure it. Then either pick an existing GCP project or create a new one to work with.
Set the project:
$ gcloud config set project <PROJECT_ID>
Install kubectl:
$ gcloud components install kubectl
Create a cluster on Kubernetes Engine:
$ gcloud container clusters create node-kubernetes \
--num-nodes=3 --zone us-central1-a --machine-type g1-small
Connect the kubectl client to the cluster:
$ gcloud container clusters get-credentials node-kubernetes --zone us-central1-a
Build and push the image to the Container Registry:
$ gcloud auth configure-docker
$ docker build -t gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1 .
$ docker push gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1
Create the secret object:
$ kubectl apply -f ./kubernetes/secret.yaml
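The kubernetes/secret.yaml being applied might look roughly like this sketch; the secret name (postgres-credentials), the key names, and the base64-encoded sample value are assumptions to adapt to the real manifest:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials   # assumed name; referenced by the deployments below
type: Opaque
data:
  user: c2FtcGxl       # base64 of "sample"
  password: c2FtcGxl   # base64 of "sample" -- substitute your own value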
Create a Persistent Disk:
$ gcloud compute disks create pg-data-disk --size 50GB --zone us-central1-a
Create the volume:
$ kubectl apply -f ./kubernetes/volume.yaml
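kubernetes/volume.yaml defines a PersistentVolume backed by the disk created above. A rough sketch, with the volume name and storage class as assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-data-volume        # assumed name
spec:
  capacity:
    storage: 50Gi             # matches the 50GB disk created above
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pg-data-disk      # the disk created with gcloud above
    fsType: ext4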
Create the volume claim:
$ kubectl apply -f ./kubernetes/volume-claim.yaml
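kubernetes/volume-claim.yaml then claims that volume. A minimal sketch, with the claim name assumed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data-claim         # assumed name; referenced by the Postgres deployment
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi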
Create the Postgres deployment:
$ kubectl create -f ./kubernetes/postgres-deployment.yaml
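kubernetes/postgres-deployment.yaml wires the secret and the volume claim into a single-replica Postgres pod. A sketch built on the assumed names and labels from the manifests above (not the actual file):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      service: postgres
  template:
    metadata:
      labels:
        service: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13-alpine          # assumed image tag
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
          volumeMounts:
            - name: pg-data
              mountPath: /var/lib/postgresql/data
              subPath: pgdata                # keep data clear of the disk's lost+found
      volumes:
        - name: pg-data
          persistentVolumeClaim:
            claimName: pg-data-claim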
Create the Postgres service:
$ kubectl create -f ./kubernetes/postgres-service.yaml
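kubernetes/postgres-service.yaml exposes Postgres inside the cluster (ClusterIP) so the Node pods can reach it by name. A sketch assuming the labels used above:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  selector:
    service: postgres
  ports:
    - port: 5432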
Create the database:
$ kubectl get pods
$ kubectl exec <POD_NAME> --stdin --tty -- createdb -U sample todos
Update the image name in kubernetes/node-deployment-updated.yaml and then create the deployment:
$ kubectl create -f ./kubernetes/node-deployment-updated.yaml
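kubernetes/node-deployment-updated.yaml should point at the image pushed to the Container Registry earlier. A sketch only; the labels and the environment variable names the app reads are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
        - name: node
          image: gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1   # the image pushed above
          ports:
            - containerPort: 3000
          env:
            - name: POSTGRES_HOST
              value: postgres                # resolves via the Postgres service
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password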
Create the node service:
$ kubectl create -f ./kubernetes/node-service.yaml
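kubernetes/node-service.yaml fronts those pods with a LoadBalancer on port 3000, which is where the external IP below comes from. A sketch assuming the app: node label from the deployment above:

apiVersion: v1
kind: Service
metadata:
  name: node
spec:
  type: LoadBalancer
  selector:
    app: node
  ports:
    - port: 3000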
Apply the migrations and seed the database:
$ kubectl get pods
$ kubectl exec <POD_NAME> -- knex migrate:latest
$ kubectl exec <POD_NAME> -- knex seed:run
Grab the external IP:
$ kubectl get service node
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
node   LoadBalancer   10.39.244.136   35.232.249.48   3000:30743/TCP   2m
Test it out:
Remove the resources once done:
$ kubectl delete -f ./kubernetes/node-service.yaml
$ kubectl delete -f ./kubernetes/node-deployment-updated.yaml
$ kubectl delete -f ./kubernetes/secret.yaml
$ kubectl delete -f ./kubernetes/volume.yaml
$ kubectl delete -f ./kubernetes/volume-claim.yaml
$ kubectl delete -f ./kubernetes/postgres-deployment.yaml
$ kubectl delete -f ./kubernetes/postgres-service.yaml
$ gcloud container clusters delete node-kubernetes --zone us-central1-a
$ gcloud compute disks delete pg-data-disk --zone us-central1-a
$ gcloud container images delete gcr.io/<PROJECT_ID>/node-kubernetes:v0.0.1