Spin Up Platform
So you'd like to run the platform, would you? This guide will tell you which commands to run, where, and why. The moving parts are:
- Kubernetes, managed using kops
- 'Platform' itself, composed of the API container and the cron worker container
- GitHub OAuth
- A database (Postgres preferred)
We're going to install kops on our local machine and use it with the config stored in the s3://k8s-reconfigureio-infra bucket to spin up a Kubernetes cluster.
kops depends on kubectl, so follow the kubectl installation instructions.
Since I (Max) am using a Linux distro that supports Snap, the install process becomes:
sudo snap install kubectl --classic
Next up we need to install kops, following the kops installation instructions:
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
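As a quick sanity check that both tools landed on the PATH (the version numbers you see will vary):

```shell
# Verify the tools installed above are reachable
kops version
kubectl version --client
```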
Next we need to tell kops to look at our S3 bucket for config. Using awscli, make sure you can access this bucket with:
aws s3 ls s3://k8s-reconfigureio-infra
If you can't, you need to set up your IAM access keys - you'll want the same permissions as the 'campgareth' account. If you don't have keys, generate them and set them up for your console as per https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html. Use us-east-1 as the default region.
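For reference, the same setup can be done non-interactively with `aws configure set`; the key values below are placeholders, not real credentials:

```shell
# Placeholder credentials - substitute your own IAM access keys
aws configure set aws_access_key_id AKIAEXAMPLEKEY
aws configure set aws_secret_access_key wJalrEXAMPLESECRETKEY
aws configure set default.region us-east-1
```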
export KOPS_STATE_STORE=s3://k8s-reconfigureio-infra
kubectl is used for managing a running k8s cluster, so we need to get kops to build kubectl's config file:
kops export kubecfg k8s.reconfigure.io
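If the export worked, kubectl should now have a context for the cluster (the context name is assumed to match the cluster name above):

```shell
# Should print the cluster name, k8s.reconfigure.io
kubectl config current-context
```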
Right, now we have all the tooling set up, let's actually manage the cluster! The S3 bucket holds all the config for a production k8s cluster apart from one key setting: the number of instances to run as part of the cluster is set to 0. What we're going to do next is edit this config so we have at least one master node and at least one worker node.
kops edit instancegroup master-us-east-1b
This command will open up the config file for the master nodes and it should look like this:
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-08-17T15:15:18Z
  labels:
    kops.k8s.io/cluster: k8s.reconfigure.io
  name: master-us-east-1b
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: m3.medium
  maxSize: 0
  minSize: 0
  role: Master
  subnets:
  - us-east-1b
Change maxSize and minSize to 1, then save and exit.
Next we do the same for the worker nodes.
kops edit instancegroup nodes
This command should open up the config file for the worker nodes and it should look like this:
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-08-17T15:15:19Z
  labels:
    kops.k8s.io/cluster: k8s.reconfigure.io
  name: nodes
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: t2.medium
  maxSize: 0
  minSize: 0
  role: Node
  subnets:
  - us-east-1b
Change maxSize and minSize to 2, then save and exit.
Make reality match this config with kops update cluster --yes (omit the --yes to see what changes will be made).
You should be able to see the instances start up in the EC2 console.
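Once the instances are up, the cluster itself should report them; with the sizes set above you'd expect one master and two workers:

```shell
# Wait for the instances to register with the API server, then list them
# With minSize/maxSize as above: 1 master + 2 nodes, all Ready
kubectl get nodes
```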
KNOWN ISSUE - NEEDS FIXING: the second kube2iam is failing to start.
When they're up and running, continue below...
First check what's running on Kubernetes using kubectl get pods. If there isn't a running pod with a name similar to production-platform-web, then we'll need to compile platform and deploy it to k8s.
To do this we'll need to install docker and docker-compose (see the Docker installation instructions for Ubuntu and the Docker Compose installation instructions). Then, inside a checkout of https://github.com/ReconfigureIO/platform, run make dependencies (needs glide) and then:
docker-compose run --rm web-base make clean
docker-compose down
docker network prune --force
docker-compose run --rm web-base make all
make image
make push-image migrate-staging deploy-staging migrate-production deploy-production
To run it (on-prem version):
docker-compose -f docker-compose.on-prem.yml up
When you see web_1 | [GIN-debug] Listening and serving HTTP on :8080, you can test by going to http://localhost:8080.
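A quick smoke test from another terminal (assuming the default :8080 bind above):

```shell
# Any HTTP response at all means the server is up and serving
curl -i http://localhost:8080/
```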
Before executing the given reco login, set up your command line with:
export PLATFORM_SERVER=http://localhost:8080
TODO: Spin up a temporary database on k8s rather than keep our old production database around since it's one of the most expensive things on our AWS account.
To spin down: docker-compose -f docker-compose.on-prem.yml down --remove-orphans
Log Docker in to ECR. Note that aws ecr get-login only prints a docker login command, so run its output, e.g. by wrapping it in $( ):
$(aws ecr get-login --no-include-email)
Pull the compiler image:
docker image pull 398048034572.dkr.ecr.us-east-1.amazonaws.com/reconfigureio/build-framework/sdaccel-builder:v0.18.7
You'll need to have installed Vivado 2017.1 on the hosting machine for the build to work.
If the Reconfigure.io GitHub Organisation still exists we should be fine, as our platform is registered as an OAuth Application through that organisation.
TODO: Fill in this guide as if that application did not exist already. Pointers: platform/k8s/production/config.yaml GITHUB_CLIENT_*, platform/docker-compose.yaml
Let's put a copy of Vivado 2017.1 in S3. Not the installer, just the whole of /opt/Xilinx from an AWS FPGA Dev AMI running Vivado 2017.1, tarred. By doing this we allow anyone running a Centos container to pull in a full install of Vivado, minus licenses. This could be used, for example, to run our full go-to-bitstream compiler workflow on a developer's laptop. We'd also need licenses; those are stored in a license server container that we used to run on Jenkins, but we could pull this into any Docker-equipped environment.
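Sketching that tar-and-upload step - the bucket name here is hypothetical, and this assumes you're on the Dev AMI with /opt/Xilinx present:

```shell
# Run on the AWS FPGA Dev AMI: tar the whole install tree (not the installer)
tar -czf vivado-2017.1.tar.gz -C /opt Xilinx

# Upload to S3 - the bucket name is a placeholder, pick/create a real one
aws s3 cp vivado-2017.1.tar.gz s3://example-reconfigureio-vivado/vivado-2017.1.tar.gz
```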
This would work for anyone, including those outside AWS's estate, but it isn't the fastest route for those inside it. For those machines a faster route would be to mount an EBS volume containing Vivado into running compiler containers. This can apparently be done through Kubernetes Volumes; however, one EBS volume can only be attached to one worker node (instance) at a time.