# drone-eks-deploy

`drone-eks-deploy` is a Drone CI plugin that applies Kubernetes
manifests to EKS-based clusters.

A typical pipeline step using the plugin:
```yaml
pipeline:
  deploy:
    image: caian/drone-eks-plugin
    cluster: arn:aws:eks:us-east-1:001122334455:cluster/cluster-name
    node_role: arn:aws:iam::001122334455:role/eks-node-role
    manifest: k8s/deployment.yml
    secrets: [ aws_access_key, aws_secret_key ]
    when:
      branch: master
      event: tag
```
The image is publicly available on Docker Hub as `caian/drone-eks-plugin`. Alternatively, you can build your own image and push it to your registry:

```shell
$ docker build -t drone-eks-deploy .
```
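Once built, the image can be tagged and pushed to your own registry; the registry host below is only a placeholder:

```shell
# Tag the locally built image for a private registry (hypothetical host)
$ docker tag drone-eks-deploy registry.example.com/drone-eks-deploy:latest

# Push it so Drone agents can pull it as the pipeline's plugin image
$ docker push registry.example.com/drone-eks-deploy:latest
```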
## Parameters

| Parameter | Required? | Description |
|---|---|---|
| `CLUSTER` | Yes | The EKS cluster's ARN. |
| `MANIFEST` | Yes | The Kubernetes manifest file path. |
| `NODE_ROLE` | Yes | The EKS node IAM role ARN. |
| `AWS_REGION` | No | The EKS cluster's region. |
The `AWS_REGION` parameter defaults to the region of the Drone agent.
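If the cluster lives in a region other than the Drone agent's, the region can presumably be pinned in the step itself; the lowercase `aws_region` key is assumed here by analogy with the plugin's other parameters:

```yaml
pipeline:
  deploy:
    image: caian/drone-eks-plugin
    cluster: arn:aws:eks:us-west-2:001122334455:cluster/cluster-name
    # Assumed parameter key, overriding the agent's default region:
    aws_region: us-west-2
    manifest: k8s/deployment.yml
```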
## Credentials

`drone-eks-deploy` requires a set of AWS credentials (the access
and secret keys). These credentials must have enough permissions to
perform the desired changes on the EKS cluster.

The access and secret keys can be injected into the container via Drone's
secrets. Note that the secrets must be named after the environment
variables the `awscli` looks for. If you manage multiple AWS credentials
within Drone CI, you can use alternate names. Example:
```yaml
pipeline:
  deploy-staging:
    secrets:
      - aws_access_key: stg_aws_access_key
        aws_secret_key: stg_aws_secret_key

  deploy-production:
    secrets:
      - aws_access_key: prd_aws_access_key
        aws_secret_key: prd_aws_secret_key
```
As stated in the parameters section of this document,
`drone-eks-deploy` requires the ARN (Amazon Resource Name) of the EKS node
role. This IAM role typically comprises the following policies:

- `AmazonEKSWorkerNodePolicy`
- `AmazonEC2ContainerRegistryReadOnly`
- `AmazonEKS_CNI_Policy`

This role (the EKS node role) must be assumable by the Drone agent. In AWS, this means that the EC2 instance that runs Drone must have a role allowing it to assume the EKS node role.
Supposing an EKS node role named `eks-node-role` on an account with id
`012345678901`, this could be accomplished by the following statement:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::012345678901:role/eks-node-role"
        }
    ]
}
```
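One way to attach this statement to the Drone agent's instance role is as an inline policy via the awscli; the role name, policy name, and file name below are hypothetical:

```shell
# Attach the assume-role statement (saved as assume-eks-node-role.json)
# to the Drone agent's instance role. All names here are placeholders.
$ aws iam put-role-policy \
    --role-name drone-agent-role \
    --policy-name assume-eks-node-role \
    --policy-document file://assume-eks-node-role.json
```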
The EKS node role will, in turn, require a trust relationship allowing both the resource (namely, the EC2 instance that runs the Drone agent) and the account itself to assume it:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::012345678901:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```
Finally, the Drone agent must be able to describe (get information about) the
EKS cluster specified in the `CLUSTER` parameter:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "*"
        }
    ]
}
```
Kubernetes must also be configured to allow changes from the user whose credentials are used to apply the manifests. One approach is to bind an AWS user to a group. This group is then subject to an RBAC declaration, allowing the user to perform the necessary actions within the cluster.
To begin, provided you have the `awscli` configured, you can
easily update your kubeconfig to add the context of your EKS
cluster:

```shell
$ aws eks update-kubeconfig --name cluster-name
```
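If the cluster sits in a region other than your default awscli region, the region can be passed explicitly (the region below is an example):

```shell
# Fetch the cluster's endpoint from a specific region and merge
# the resulting context into the local kubeconfig
$ aws eks update-kubeconfig --name cluster-name --region us-east-1
```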
Note, however, that the AWS user used in this approach must be the same
one that created the EKS cluster. In the examples below, the `k8sadmin`
user is considered the creator of the cluster.
The `aws-auth` ConfigMap can be used to bind the AWS user to a group:

```shell
$ kubectl edit -n kube-system configmap/aws-auth
```
Inside the configuration file, under the `data` key, include a `mapUsers`
section. In the example below, the `k8sadmin` user is bound to the
`deployer` group.
```yaml
apiVersion: v1
data:
  mapUsers: |
    - userarn: arn:aws:iam::012345678901:user/k8sadmin
      username: k8sadmin
      groups:
        - deployer
```
The ConfigMap statement must contain the user name as well as its ARN.
Now that the user `k8sadmin` is part of the group `deployer`, the RBAC must
be applied to the cluster to finish the authorization. The RBAC below
comprises two statements: a `ClusterRole` and a `ClusterRoleBinding`.

The `ClusterRole` statement defines a cluster role named `drone-deployer`
with a given list of authorized verbs and resources. The `ClusterRoleBinding`
statement then binds the `deployer` group to the `drone-deployer` role, thus
authorizing `k8sadmin` to apply the given manifest (from the `MANIFEST`
parameter).
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: drone-deployer
rules:
  - apiGroups:
      - extensions
    resources:
      - deployments
    verbs:
      - get
      - list
      - patch
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: drone-deployer
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: deployer
```
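Assuming the RBAC above is saved as `rbac.yml` (a hypothetical file name), it can be applied and then sanity-checked with impersonation:

```shell
# Apply the ClusterRole and ClusterRoleBinding
$ kubectl apply -f rbac.yml

# Verify that a member of the "deployer" group may patch deployments
$ kubectl auth can-i patch deployments.extensions \
    --as k8sadmin --as-group deployer
```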