Sync docs from Discourse (#494)
Co-authored-by: GitHub Actions <41898282+github-actions[bot]@users.noreply.github.com>
github-actions[bot] authored Sep 5, 2024
1 parent 335674c commit a22db5d
Showing 50 changed files with 439 additions and 190 deletions.
File renamed without changes.
29 changes: 26 additions & 3 deletions docs/explanation/e-logs.md
@@ -10,12 +10,20 @@ root@mysql-k8s-0:/# ls -lahR /var/log/mysql
total 28K
drwxr-xr-x 1 mysql mysql 4.0K Oct 23 20:46 .
drwxr-xr-x 1 root root 4.0K Sep 27 20:55 ..
drwxrwx--- 2 mysql mysql 4.0K Oct 23 20:46 archive_audit
drwxrwx--- 2 mysql mysql 4.0K Oct 23 20:46 archive_error
drwxrwx--- 2 mysql mysql 4.0K Oct 23 20:46 archive_general
drwxrwx--- 2 mysql mysql 4.0K Oct 23 20:45 archive_slowquery
-rw-r----- 1 mysql mysql 1.2K Oct 23 20:46 error.log
-rw-r----- 1 mysql mysql 1.7K Oct 23 20:46 general.log

/var/snap/charmed-mysql/common/var/log/mysql/archive_audit:
total 452K
drwxrwx--- 2 snap_daemon snap_daemon 4.0K Sep 3 01:49 .
drwxr-xr-x 6 snap_daemon root 4.0K Sep 3 01:49 ..
-rw-r----- 1 snap_daemon root 43K Sep 3 01:24 audit.log-20240903_0124
-rw-r----- 1 snap_daemon root 109K Sep 3 01:25 audit.log-20240903_0125

/var/log/mysql/archive_error:
total 20K
drwxrwx--- 2 mysql mysql 4.0K Oct 23 20:46 .
@@ -36,6 +44,21 @@ drwxrwx--- 2 mysql mysql 4.0K Oct 23 20:45 .
drwxr-xr-x 1 mysql mysql 4.0K Oct 23 20:46 ..
```

The following is a sample of the audit logs, in JSON format, containing login/logout records:

```json
{"audit_record":{"name":"Connect","record":"17_2024-09-03T01:52:14","timestamp":"2024-09-03T01:53:14Z","connection_id":"988","status":1156,"user":"","priv_user":"","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
{"audit_record":{"name":"Connect","record":"18_2024-09-03T01:52:14","timestamp":"2024-09-03T01:53:14Z","connection_id":"989","status":0,"user":"serverconfig","priv_user":"serverconfig","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
{"audit_record":{"name":"Quit","record":"1_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:14Z","connection_id":"989","status":0,"user":"serverconfig","priv_user":"serverconfig","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
{"audit_record":{"name":"Connect","record":"2_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:33Z","connection_id":"990","status":1156,"user":"","priv_user":"","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
{"audit_record":{"name":"Connect","record":"3_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:33Z","connection_id":"991","status":0,"user":"serverconfig","priv_user":"serverconfig","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
{"audit_record":{"name":"Quit","record":"4_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:33Z","connection_id":"991","status":0,"user":"serverconfig","priv_user":"serverconfig","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
{"audit_record":{"name":"Connect","record":"5_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:33Z","connection_id":"992","status":0,"user":"clusteradmin","priv_user":"clusteradmin","os_login":"","proxy_user":"","host":"localhost","ip":"","db":""}}
{"audit_record":{"name":"Quit","record":"6_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:33Z","connection_id":"992","status":0,"user":"clusteradmin","priv_user":"clusteradmin","os_login":"","proxy_user":"","host":"localhost","ip":"","db":""}}
{"audit_record":{"name":"Connect","record":"7_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:33Z","connection_id":"993","status":1156,"user":"","priv_user":"","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
{"audit_record":{"name":"Connect","record":"8_2024-09-03T01:53:14","timestamp":"2024-09-03T01:53:33Z","connection_id":"994","status":0,"user":"serverconfig","priv_user":"serverconfig","os_login":"","proxy_user":"","host":"juju-da2225-8","ip":"10.207.85.214","db":""}}
```
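Each audit record is a self-contained JSON object on its own line, so the log can be post-processed with standard tooling. As a minimal sketch (not part of the charm; the sample records below are abbreviated from the fields shown above), here is Python that counts connection attempts and failed logins (non-zero `status`):

```python
import json

def summarize_audit_log(lines):
    """Count connection attempts and failed logins in JSON-lines audit records."""
    connects = 0
    failures = 0
    for line in lines:
        record = json.loads(line)["audit_record"]
        if record["name"] == "Connect":
            connects += 1
            if record["status"] != 0:  # a non-zero status indicates a failed login
                failures += 1
    return connects, failures

# Abbreviated records mirroring the sample above
sample = [
    '{"audit_record":{"name":"Connect","status":1156,"user":""}}',
    '{"audit_record":{"name":"Connect","status":0,"user":"serverconfig"}}',
    '{"audit_record":{"name":"Quit","status":0,"user":"serverconfig"}}',
]
print(summarize_audit_log(sample))  # (2, 1)
```

The same filtering can of course be done with `jq` or any log shipper; the point is only that each line parses independently.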

The following is a sample of the error logs, in the format `time thread [label] [err_code] [subsystem] msg`:

@@ -100,19 +123,19 @@
```shell
SET timestamp=1698099752;
do sleep(15);
```
The charm currently has error and general logs enabled by default, while slow query logs are disabled by default. All of these files, when present, are rotated into a separate dedicated archive folder under the logs directory.
We do not yet support the rotation of binary logs (binlog, relay log, undo log, redo log, etc.).
## Log Rotation Configurations
For each log (audit, error, general and slow query):
- The log file is rotated every minute (even if the log files are empty)
- The rotated log file is formatted with a date suffix of `-%V-%H%M` (-weeknumber-hourminute)
- The rotated log files are not compressed or mailed
- The rotated log files are owned by the `snap_daemon` user and group
- The rotated log files are retained for a maximum of 7 days before being deleted
- The most recent 10080 rotated log files are retained before older rotated log files are deleted
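The bullet points above map onto standard `logrotate` directives. As an illustrative sketch only (this is not the charm's actual configuration file, and the paths are placeholders), an equivalent stanza might look like:

```
/var/log/mysql/error.log {
    # rotation is triggered every minute by a scheduled logrotate run, even when empty
    rotate 10080          # keep at most 10080 rotated files (one week at one per minute)
    maxage 7              # delete rotated files older than 7 days
    dateext
    dateformat -%V-%H%M   # week-number + hour/minute suffix
    nocompress
    nomail
    su snap_daemon snap_daemon
    olddir archive_error
}
```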
The following are the `logrotate` configuration values used for log rotation:
@@ -1,5 +1,4 @@
# Clients for Async replication
> **WARNING**: This is an '8.0/candidate' article. Do NOT use it in production!<br/>Contact the [Canonical Data Platform team](/t/11868) if you are interested in this topic.

## Prerequisites
Make sure both the `Rome` and `Lisbon` clusters are deployed using the [Async Deployment manual](/t/13458)!
@@ -1,9 +1,4 @@
# Deploy Async replication
[note type="caution"]
**Warning**: This feature is for charm revision `8.0/edge`. Do NOT use it in production!

Contact the [Canonical Data Platform team](/t/11868) if you are interested in this topic.
[/note]

## Deploy

@@ -1,7 +1,5 @@
# Switchover / Failover of Async replication

> **WARNING**: This is an '8.0/edge' article. Do NOT use it in production!<br/>Contact the [Canonical Data Platform team](/t/11868) if you are interested in this topic.
## Prerequisites

Make sure both the `Rome` and `Lisbon` clusters are deployed using the [Async Deployment manual](/t/13458)!
@@ -1,5 +1,4 @@
# Recovery of Async replication
> **WARNING**: This is an '8.0/candidate' article. Do NOT use it in production!<br/>Contact the [Canonical Data Platform team](/t/11868) if you are interested in this topic.

## Prerequisites
Make sure both the `Rome` and `Lisbon` clusters are deployed using the [Async Deployment manual](/t/13458)!
@@ -1,7 +1,5 @@
# Removal of Async replication

> **WARNING**: This is an '8.0/edge' article. Do NOT use it in production!<br/>Contact the [Canonical Data Platform team](/t/11868) if you are interested in this topic.
## Prerequisites

Make sure both the `Rome` and `Lisbon` clusters are deployed using the [Async Deployment manual](/t/13458)!
7 files renamed without changes.
167 changes: 167 additions & 0 deletions docs/how-to/h-deploy-terraform.md
@@ -0,0 +1,167 @@
# How to deploy using Terraform

[Terraform](https://www.terraform.io/) is an infrastructure automation tool to provision and manage resources in clouds or data centers. To deploy Charmed MySQL K8s using Terraform and Juju, you can use the [Juju Terraform Provider](https://registry.terraform.io/providers/juju/juju/latest).

The easiest way is to start from [these examples of terraform modules](https://github.com/canonical/terraform-modules) prepared by Canonical. This page will guide you through a deployment using an example module for MySQL on Kubernetes.

For an in-depth introduction to the Juju Terraform Provider, read [this Discourse post](https://discourse.charmhub.io/t/6939).

[note]
**Note**: Storage support was added in [Juju Terraform Provider version 0.13+](https://github.com/juju/terraform-provider-juju/releases/tag/v0.13.0).
[/note]

## Summary
* [Install Terraform tooling](#install-terraform-tooling)
* [Verify the deployment](#verify-the-deployment)
* [Apply the deployment](#apply-the-deployment)
* [Check deployment status](#check-deployment-status)
* [Clean up](#clean-up)

---

## Install Terraform tooling

This guide assumes Juju is installed and you have a K8s controller already bootstrapped. For more information, check the [Set up the environment](/t/9679) tutorial page.

Let's install the Terraform tooling:
```shell
sudo snap install terraform --classic
```
Switch to the K8s controller and create a new model:
```shell
juju switch microk8s
juju add-model my-model
```
Clone the examples repository and navigate to the MySQL K8s module:
```shell
git clone https://github.com/canonical/terraform-modules.git
cd terraform-modules/modules/k8s/mysql
```

Initialise the Juju Terraform Provider:
```shell
terraform init
```

## Verify the deployment

Open the `main.tf` file to see the contents of the Terraform module:

```tf
resource "juju_application" "k8s_mysql" {
  name  = var.mysql_application_name
  model = var.juju_model_name
  trust = true

  charm {
    name    = "mysql-k8s"
    channel = var.mysql_charm_channel
  }

  units = 1
}
```
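The `var.` references above imply input variables. A hypothetical `variables.tf` consistent with them (the names come from `main.tf`; the defaults are illustrative assumptions, not the module's actual values) might look like:

```tf
variable "juju_model_name" {
  description = "Name of the Juju model to deploy into"
  type        = string
}

variable "mysql_application_name" {
  description = "Name of the deployed MySQL K8s application"
  type        = string
  default     = "mysql-k8s"
}

variable "mysql_charm_channel" {
  description = "Charm channel to deploy from"
  type        = string
  default     = "8.0/stable"
}
```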

Run `terraform plan` to get a preview of the changes that will be made:

```shell
terraform plan -var "juju_model_name=my-model"
```

## Apply the deployment

If everything looks correct, deploy the resources (skip the approval):

```shell
terraform apply -auto-approve -var "juju_model_name=my-model"
```

## Check deployment status

Check the deployment status with:

```shell
juju status --model k8s:my-model --watch 1s
```

Sample output:

```shell
Model Controller Cloud/Region Version SLA Timestamp
my-model k8s-controller microk8s/localhost 3.5.3 unsupported 12:37:25Z

App Version Status Scale Charm Channel Rev Address Exposed Message
mysql-k8s 8.0.36-0ubuntu0.22.04.1 active 1 mysql-k8s 8.0/stable 153 10.152.183.112 no

Unit Workload Agent Address Ports Message
mysql-k8s/0* active idle 10.1.77.76 Primary

```

Continue to operate the charm as usual from here or apply further Terraform changes.
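For example, scaling the cluster can be a one-line change to the `units` attribute in `main.tf`, followed by another `terraform apply` (an illustrative sketch, assuming you keep managing the deployment through this module):

```tf
resource "juju_application" "k8s_mysql" {
  # ...same attributes as in the module above...
  units = 3  # scale from 1 to 3 units, then run `terraform apply` again
}
```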

## Clean up

To keep the house clean, remove the newly deployed Charmed MySQL K8s by running:
```shell
terraform destroy -var "juju_model_name=my-model"
```

Sample output:
```shell
juju_application.k8s_mysql: Refreshing state... [id=terra-k8s:mysql-k8s]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy

Terraform will perform the following actions:

  # juju_application.k8s_mysql will be destroyed
  - resource "juju_application" "k8s_mysql" {
      - constraints = "arch=amd64" -> null
      - id          = "terra-k8s:mysql-k8s" -> null
      - model       = "terra-k8s" -> null
      - name        = "mysql-k8s" -> null
      - placement   = "" -> null
      - storage     = [
          - {
              - count = 1 -> null
              - label = "database-1" -> null
              - pool  = "kubernetes" -> null
              - size  = "1G" -> null
            },
        ] -> null
      - trust       = true -> null
      - units       = 1 -> null

      - charm {
          - base     = "ubuntu@22.04" -> null
          - channel  = "8.0/stable" -> null
          - name     = "mysql-k8s" -> null
          - revision = 153 -> null
          - series   = "jammy" -> null
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Changes to Outputs:
- application_name = "mysql-k8s" -> null

Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes

juju_application.k8s_mysql: Destroying... [id=terra-k8s:mysql-k8s]
juju_application.k8s_mysql: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.
```
---

[note]
For more examples of Terraform modules for K8s, see the other directories in the [`terraform-modules` repository](https://github.com/canonical/terraform-modules/tree/main/modules/k8s).
[/note]

Feel free to [contact us](/t/11868) if you have any questions, and [collaborate with us on GitHub](https://github.com/canonical/terraform-modules)!
61 changes: 0 additions & 61 deletions docs/how-to/h-develop/h-legacy-charm.md

This file was deleted.

File renamed without changes.
File renamed without changes.
@@ -49,7 +49,7 @@ Integrate `tempo-k8s` with the COS charms as follows:
```shell
juju integrate tempo-k8s:grafana-dashboard grafana:grafana-dashboard
juju integrate tempo-k8s:grafana-source grafana:grafana-source
juju integrate tempo-k8s:ingress traefik:traefik-route
juju integrate tempo-k8s:metrics-endpoint prometheus:metrics-endpoint
juju integrate tempo-k8s:logging loki:logging
```
15 files renamed without changes.
@@ -1,3 +1,9 @@
[note]
**Note**: All commands are written for `juju >= 3.0`.

If you are using an earlier version, check the [Juju 3.0 Release Notes](https://juju.is/docs/juju/roadmap#heading--juju-3-0-0---22-oct-2022).
[/note]

# Minor Upgrade

> :information_source: **Example**: MySQL 8.0.33 -> MySQL 8.0.34<br/>
@@ -67,7 +73,7 @@ Wait for the new unit to be up and ready.
After the application has settled, it’s necessary to run the `pre-upgrade-check` action against the leader unit:

```shell
juju run mysql-k8s/leader pre-upgrade-check
```

The output of the action should look like:
@@ -94,8 +100,8 @@ juju refresh mysql-k8s --channel 8.0/edge
# example with channel selection and juju 3.x
juju refresh mysql-k8s --channel 8.0/edge --trust

# example with specific revision selection (do NOT forget the OCI resource!)
juju refresh mysql-k8s --revision=89 --resource mysql-image=...
```

> **:information_source: IMPORTANT:** The upgrade will execute only on the highest-ordinal unit. For the running example, `mysql-k8s/2`, the `juju status` output will look like:
@@ -123,7 +129,7 @@ mysql-k8s/3 maintenance executing 10.1.148.145 upgrading unit
After the unit is upgraded, the charm will set the unit's upgrade state to completed. If deemed necessary, you can further verify that the upgrade succeeded. Once the unit is healthy within the cluster, the next step is to resume the upgrade process by running:

```shell
juju run mysql-k8s/leader resume-upgrade
```

The `resume-upgrade` action will roll out the upgrade to the next unit, always from the highest ordinal to the lowest, and after each unit upgrades successfully, the process automatically rolls out to the next one.