This repository has been archived by the owner on Aug 30, 2022. It is now read-only.

Improve README
achetronic committed Jul 12, 2022
1 parent dd551ae commit ee32289
Showing 5 changed files with 89 additions and 33 deletions.
94 changes: 65 additions & 29 deletions README.md
```
| +----------------------+
|
+-----------------------+ +--------------------+ | +----------------------+
| 1. Automated K8s +------>| 2. FluxCD Template +---+----->| 4. Monitoring Stack |
+-----------------------+ +--------------------+ | +----------------------+
|
| +----------------------+
+----->| 5. Applications |
+----------------------+
```

**[1] Automated K8S:** This stage consists of a Kubernetes cluster which is created and automated using a GitOps approach.
We cover this stage with several repositories ready for different cloud providers:

* [Automated EKS](https://github.com/prosimcorp/automated-eks)
* [Automated GKE](https://github.com/prosimcorp/automated-gke)
* [Automated DO](https://github.com/prosimcorp/automated-do)

> This stage can be covered for other cloud providers or with different technologies. Don't hesitate to code them
> if needed.

**[2] [FluxCD Template](https://github.com/prosimcorp/fluxcd-template)**

**[3] [Tooling Stack](https://github.com/prosimcorp/tooling-stack)**

**[4] [Monitoring Stack](https://github.com/prosimcorp/monitoring-stack)**

## Description

A stack to deploy the tools you need inside the cluster (to deploy applications) that are not included
in Kubernetes by default.

This is the stack that needs to be deployed before the
[Monitoring Stack](https://github.com/prosimcorp/monitoring-stack).

## Motivation

As SREs, we need the operators we use to be reliable, and some tools must be configured to work together.
This stack is made on top of well-tested tools, but this does NOT mean zero space for improvement.
Some newer tools will be tested on separate branches if we detect they help developers or SRE members work more easily
and faster.

While polishing any tool inside the stack, some parameters may differ between what we consider the `develop` environment
and `production`. All changes are always tested on `develop` first and stabilized before being promoted to `production`
once they become solid.

We cannot cover the specific use cases of every cloud provider. For that reason, we have decided to keep the scope
limited to the major ones and, depending on the time we can spend on this project, increase the number in the future.
The problem here is not maintaining the provider-agnostic operators or controllers with the sanest config parameters, but
those attached directly to a provider, like CSI controllers when they are not included by default on some provider,
which force us to do some magic tricks _**cough! cough! Amazon**_

**Supported cloud providers:**

- Amazon Web Services `aws`
- Google Cloud Platform `gcp`
- DigitalOcean `do`

## Some requirements here

> All the following requirements are created inside Kubernetes on cluster creation if you create the cluster using
> any flavor of [Automated K8S](./README.md#important)

### IAM permissions

```yaml
apiVersion: v1
kind: ConfigMap  # header reconstructed: the fields below belong to a ConfigMap
metadata:
  name: cluster-info
  namespace: kube-system
data:
  # The provider. Available values are: AWS, GCP, DO
  provider: AWS
  # Project ID on GCP, account number on AWS, or project name on DO
  account: "111111111111"
  region: eu-west-1
  name: your-kubernetes-cluster-name
```
## How to deploy manually

> ⚠️ The purpose of this project is not to be deployed by hand, but by using GitOps
> with tools like FluxCD or ArgoCD.

This repository is composed of deployments for several projects. Some can be deployed using `Kustomize` and
others use `Helm`. They are all located inside the `deploy` directory, and you can deploy them all, step by step,
by entering one directory, deploying its content and then moving to the next.

Now, please deploy this by using automation tools. Deeper information in the following sections 🙂
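That per-directory loop can be sketched in shell. Everything here is illustrative: the two sub-directories are invented placeholders (not the stack's real layout), and the deployment commands are only echoed so you can review the plan before running anything.

```shell
# Sketch of the step-by-step deployment described above.
# The sub-directories below are invented for the example only.
mkdir -p deploy/kustomize-app deploy/helm-app
touch deploy/kustomize-app/kustomization.yaml   # marks a Kustomize-based project
touch deploy/helm-app/Chart.yaml                # marks a Helm-based project

for dir in deploy/*/; do
  if [ -f "${dir}kustomization.yaml" ]; then
    # Kustomize project: print the command instead of running it
    echo "kubectl apply -k ${dir}"
  elif [ -f "${dir}Chart.yaml" ]; then
    # Helm chart: release name derived from the directory name
    echo "helm upgrade --install $(basename "${dir}") ${dir}"
  fi
done
```

Once the printed plan looks right against the real `deploy` tree, swap the `echo` for the actual calls.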

## How to deploy using Flux

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2  # apiVersion reconstructed; check your Flux version
kind: Kustomization
metadata:
  name: tooling-stack  # name reconstructed from the sourceRef below
  namespace: flux-system
spec:
  interval: 10m
  retryInterval: 1m
  path: ./fluxcd/production/do
  prune: false
  sourceRef:
    kind: GitRepository
    name: tooling-stack
    namespace: flux-system
```
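The Kustomization above points at a `GitRepository` named `tooling-stack` in `flux-system`. If your cluster does not define it yet, a minimal sketch could look like this (the `url`, `branch` and API version are assumptions to verify against your Flux installation):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2  # adjust to your Flux version
kind: GitRepository
metadata:
  name: tooling-stack
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/prosimcorp/tooling-stack  # assumed upstream URL
  ref:
    branch: main  # assumed branch
```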

Pay special attention to the `spec.path` parameter: that is where the desired stack is defined, setting
the parameter according to the following pattern: `./fluxcd/{environment}/{provider}`

**Supported values:**

- Environments: `develop`, `production`
- Cloud providers: `aws`, `gcp`, `do`

Examples:
- Production values on DigitalOcean:

```yaml
...
path: ./fluxcd/production/do
```

- Develop values on GCP:

```yaml
...
path: ./fluxcd/develop/gcp
```

Anyway, as described previously, we made a template for you, with more extensive documentation, to make it easier to start:
[FluxCD Template](https://github.com/prosimcorp/fluxcd-template).

## Troubleshooting

and your responsibility is only updating the manifests for External Secrets 😎

## How to collaborate

> By deploying the stack with `develop` parameters you can test all the stack with the latest changes, and collaborate
> to improve it.

1. Open an issue and discuss the problem to find together the best way to solve it
2. Fork the repository, create a branch and change everything you need
3. Launch a cluster pointing tooling-stack to your repository with the changes
10 changes: 8 additions & 2 deletions examples/README.md
## IAM

As you can see in the [global documentation](/README.md), some IAM roles and policies need to be created on the
cloud provider for this stack to work properly. We decided to document them all separately to make
them easier to understand.

The documentation for each policy can be found in its dedicated file:

- AWS
  - [External DNS](./aws/iam/external-dns.md)
  - [CSI Driver](./aws/iam/csi-driver.md)

- GCP
  - [External DNS](./gcp/iam/external-dns.md)
  - [CSI Driver](./gcp/iam/csi-driver.md)

- DigitalOcean
  - [External DNS](./do/iam/external-dns.md)
  - [CSI Driver](./do/iam/csi-driver.md)
4 changes: 4 additions & 0 deletions examples/do/iam/csi-driver.md
## CSI Driver (DigitalOcean Block Storage)

For DigitalOcean, Kubernetes clusters deploy the CSI driver by default.
Because of that, we don't include its deployment in this stack.
10 changes: 10 additions & 0 deletions examples/do/iam/external-dns.md
## External DNS

DigitalOcean does not offer an IAM service. Instead, they use API tokens to authenticate directly against the API.

To be able to communicate with the API, External DNS needs to know and use that token, so you must provide it as a
Kubernetes Secret resource as follows:

```console
kubectl create secret generic external-dns-environment -n external-dns --from-literal DO_TOKEN="your-token-here"
```
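For reference, this is one common way such a Secret reaches the controller, via `envFrom` on the external-dns container (a sketch only; the image tag and flags are assumptions, not this stack's pinned values):

```yaml
# Illustrative fragment of an external-dns pod spec consuming the Secret above.
containers:
  - name: external-dns
    image: k8s.gcr.io/external-dns/external-dns:v0.12.2  # tag is an assumption
    args:
      - --provider=digitalocean  # the DigitalOcean provider reads DO_TOKEN
    envFrom:
      - secretRef:
          name: external-dns-environment  # injects DO_TOKEN into the container
```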
4 changes: 2 additions & 2 deletions examples/gcp/iam/csi-driver.md
## CSI Driver (GCE Persistent Disk)

For Google Cloud Platform, GKE clusters deploy the CSI driver by default.
Because of that, we don't include its deployment in this stack.
