add k8s upgrade doc, bump k8s version, prettify docs
rimusz committed Mar 23, 2018
1 parent afa18bd commit 87be7ce
Showing 9 changed files with 211 additions and 119 deletions.
README.md: 3 additions, 0 deletions
@@ -48,6 +48,9 @@ Upgrade the DC/OS cluster:
Change number of DC/OS agents:
* [Add/remove DC/OS agents](docs/DCOS_AGENTS.md)

+Upgrade the Kubernetes cluster:
+* [Upgrade Kubernetes](docs/UPGRADE_KUBERNETES.md)
+
## Documentation

All documentation for this project is located in the [docs](docs/) directory at the root of this repository.
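The upgrade doc itself (docs/UPGRADE_KUBERNETES.md) is not part of the hunks shown here, so the following is only an illustrative sketch of how a finished upgrade is typically verified from a workstation with `kubectl` already pointed at the cluster; the expectation that every node reports the bumped version is an assumption, not something taken from that file:

```shell
# Illustrative check after a Kubernetes upgrade (not taken from UPGRADE_KUBERNETES.md):
# every node should be Ready and report the new kubelet version.
$ kubectl get nodes -o wide
$ kubectl version --short
```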
docs/DCOS_AGENTS.md: 7 additions, 7 deletions
@@ -20,14 +20,14 @@ Edit `./hosts.yaml` and fill in the public IP addresses of your cluster agents s

To check that all instances are reachable via Ansible, run the following:

-```bash
-ansible all -m ping
+```shell
+$ ansible all -m ping
```

Finally, apply the Ansible playbook:

-```bash
-ansible-playbook plays/install.yml
+```shell
+$ ansible-playbook plays/install.yml
```

## Cloud Providers
@@ -41,7 +41,7 @@ num_of_public_agents = "1"

Then you can apply the profile with:

-```bash
-make launch-infra
-ansible-playbook -i inventory.py plays/install.yml
+```shell
+$ make launch-infra
+$ ansible-playbook -i inventory.py plays/install.yml
```
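For reference, when the connectivity check above succeeds, `ansible all -m ping` prints one SUCCESS/pong result per host defined in `./hosts.yaml`, roughly like this (illustrative output; the IP addresses are placeholders):

```shell
$ ansible all -m ping
10.0.0.11 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
10.0.0.12 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```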
docs/INSTALL_AWS.md: 27 additions, 27 deletions
@@ -2,17 +2,17 @@

With the following guide, you are able to install a DC/OS cluster on AWS. You need the tools Terraform and Ansible installed. On MacOS, you can use [brew](https://brew.sh/) for that.

-```
-brew install terraform
-brew install ansible
+```shell
+$ brew install terraform
+$ brew install ansible
```

## Setup infrastructure

### Pull down the DC/OS Terraform scripts below

-```bash
-make aws
+```shell
+$ make aws
```

### Configure your AWS ssh Keys
@@ -21,8 +21,8 @@ In the file `.deploy/desired_cluster_profile` there is a `key_name` variable. Th

When you have your key available, you can use ssh-add.

-```bash
-ssh-add ~/.ssh/path_to_you_key.pem
+```shell
+$ ssh-add ~/.ssh/path_to_you_key.pem
```

### Configure your IAM AWS Keys
@@ -32,7 +32,7 @@

Here is an example of the output when you're done:

-```bash
+```shell
$ cat ~/.aws/credentials
[default]
aws_access_key_id = ACHEHS71DG712w7EXAMPLE
@@ -45,7 +45,7 @@ The setup variables for Terraform are defined in the file `.deploy/desired_clust

For example, you can see the default configuration of your cluster:

-```bash
+```shell
$ cat .deploy/desired_cluster_profile
os = "centos_7.4"
state = "none"
@@ -66,14 +66,14 @@ admin_cidr = "0.0.0.0/0"

You can plan the profile with Terraform while referencing:

-```bash
-make plan
+```shell
+$ make plan
```

If you are happy with the changes, then you can apply the profile with Terraform while referencing:

-```bash
-make launch-infra
+```shell
+$ make launch-infra
```

## Install DC/OS
@@ -82,8 +82,8 @@ Once the components are created, we can run the Ansible script to install DC/OS

The setup variables for DC/OS are defined in the file `group_vars/all`. Copy the example file by running:

-```
-cp group_vars/all.example group_vars/all
+```shell
+$ cp group_vars/all.example group_vars/all
```

The newly created file `group_vars/all` is used to configure DC/OS. The variables are explained within the file.
@@ -101,40 +101,40 @@ dcos_s3_bucket: 'YOUR_BUCKET_NAME'

Ansible also needs to know how to find the instances that were created via Terraform. For that, you run a dynamic inventory script called `./inventory.py`. To use it, specify the script with the parameter `-i`. For example, check that all instances are reachable via Ansible:

-```
-ansible all -i inventory.py -m ping
+```shell
+$ ansible all -i inventory.py -m ping
```

Finally, you can install DC/OS by running:

-```
-ansible-playbook -i inventory.py plays/install.yml
+```shell
+$ ansible-playbook -i inventory.py plays/install.yml
```

## Access the cluster

If the installation was successful, you should be able to reach the Master load balancer. You can find the URL of the Master LB with the following command:

-```
-make ui
+```shell
+$ make ui
```

Set up the `dcos` CLI to access your cluster:

-```
-make setup-cli
+```shell
+$ make setup-cli
```

The terraform script also created a load balancer for the public agents:

-```
-make public-lb
+```shell
+$ make public-lb
```

## Destroy the cluster

To delete the AWS stack run the command:

-```
-make destroy
+```shell
+$ make destroy
```
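After `make setup-cli` has pointed the `dcos` CLI at the Master load balancer, a minimal sanity check of the new cluster can look like the sketch below; `dcos node` and `dcos service` are standard DC/OS CLI subcommands, but the exact output depends on the DC/OS version and the services installed:

```shell
# Minimal post-install sanity check (assumes `make setup-cli` configured the cluster URL):
$ dcos node       # lists master and agent nodes registered with the cluster
$ dcos service    # lists running services and their health
```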
docs/INSTALL_AZURE.md: 28 additions, 28 deletions
@@ -2,25 +2,25 @@

With the following guide, you are able to install a DC/OS cluster on Azure. You need the tools Terraform and Ansible installed. On MacOS, you can use [brew](https://brew.sh/) for that.

-```
-brew install terraform
-brew install ansible
+```shell
+$ brew install terraform
+$ brew install ansible
```

## Setup infrastructure

### Pull down the DC/OS Terraform scripts below

-```bash
-make azure
+```shell
+$ make azure
```

### Configure your Azure ssh Keys

Add the private key that you will be using to your ssh-agent, and set the public key in Terraform.

-```bash
-ssh-add ~/.ssh/your_private_azure_key.pem
+```shell
+$ ssh-add ~/.ssh/your_private_azure_key.pem
```

Add your Azure ssh key to `.deploy/desired_cluster_profile` file:
@@ -35,7 +35,7 @@ Follow the Terraform instructions [here](https://www.terraform.io/docs/providers
When you've successfully retrieved the output of `az account list`, create a source file so you can easily load your credentials in the future.


-```bash
+```shell
$ cat ~/.azure/credentials
export ARM_TENANT_ID=45ef06c1-a57b-40d5-967f-88cf8example
export ARM_CLIENT_SECRET=Lqw0kyzWXyEjfha9hfhs8dhasjpJUIGQhNFExAmPLE
@@ -47,7 +47,7 @@ export ARM_SUBSCRIPTION_ID=846d9e22-a320-488c-92d5-41112example

Set your environment variables by sourcing the files before you run any terraform commands.

-```bash
+```shell
$ source ~/.azure/credentials
```

@@ -57,7 +57,7 @@ The setup variables for Terraform are defined in the file `.deploy/desired_clust

For example, you can see the default configuration of your cluster:

-```bash
+```shell
$ cat .deploy/desired_cluster_profile
os = "centos_7.3"
state = "none"
@@ -77,14 +77,14 @@ admin_cidr = "0.0.0.0/0"

You can plan the profile with Terraform while referencing:

-```bash
-make plan
+```shell
+$ make plan
```

If you are happy with the changes, then you can apply the profile with Terraform while referencing:

-```bash
-make launch-infra
+```shell
+$ make launch-infra
```

## Install DC/OS
@@ -95,8 +95,8 @@ You have to add the private SSH key (defined in Terraform with variable `ssh_key

The setup variables for DC/OS are defined in the file `group_vars/all`. Copy the example file by running:

-```
-cp group_vars/all.example group_vars/all
+```shell
+$ cp group_vars/all.example group_vars/all
```

The newly created file `group_vars/all` is used to configure DC/OS. The variables are explained within the file.
@@ -112,40 +112,40 @@ dcos_exhibitor_azure_account_key: '******'

Ansible also needs to know how to find the instances that were created via Terraform. For that, you run a dynamic inventory script called `./inventory.py`. To use it, specify the script with the parameter `-i`. For example, check that all instances are reachable via Ansible:

-```
-ansible all -i inventory.py -m ping
+```shell
+$ ansible all -i inventory.py -m ping
```

Finally, you can install DC/OS by running:

-```
-ansible-playbook -i inventory.py plays/install.yml
+```shell
+$ ansible-playbook -i inventory.py plays/install.yml
```

## Access the cluster

If the installation was successful, you should be able to reach the Master load balancer. You can find the URL of the Master LB with the following command:

-```
-make ui
+```shell
+$ make ui
```

Set up the `dcos` CLI to access your cluster:

-```
-make setup-cli
+```shell
+$ make setup-cli
```

The terraform script also created a load balancer for the public agents:

-```
-make public-lb
+```shell
+$ make public-lb
```

## Destroy the cluster

To delete the Azure stack run the command:

-```
-make destroy
+```shell
+$ make destroy
```
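Taken together, the Azure walkthrough above boils down to roughly the following sequence. This is only a condensed recap of the commands already shown in the diff; key paths, credentials, and profile values are placeholders:

```shell
# Condensed recap of the Azure flow documented above.
$ make azure                                    # pull down the DC/OS Terraform scripts
$ ssh-add ~/.ssh/your_private_azure_key.pem     # make the ssh key available
$ source ~/.azure/credentials                   # export the ARM_* variables
$ make plan                                     # review the planned infrastructure
$ make launch-infra                             # create the infrastructure
$ cp group_vars/all.example group_vars/all      # then edit group_vars/all for DC/OS
$ ansible-playbook -i inventory.py plays/install.yml
$ make ui                                       # print the Master LB URL
```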