From 6de6ed3213d11dd1f658eb1f37e8be4baef01e0b Mon Sep 17 00:00:00 2001 From: Heather Young Date: Mon, 14 Oct 2019 14:35:41 +0300 Subject: [PATCH 01/23] wip: production firewall rules --- docs/4.1.yaml | 20 +++++- docs/4.1/guides/production.md | 120 ++++++++++++++++++++++++++++++++++ 2 files changed, 138 insertions(+), 2 deletions(-) create mode 100644 docs/4.1/guides/production.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 1d5a1af6b1577..2b5f58d6ee265 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -30,8 +30,24 @@ pages: - Admin Manual: admin-guide.md - FAQ: faq.md - Guides: - - AWS: aws_oss_guide.md - - Installation: installation.md + - Quickstart: guides/quickstart.md + - Installation: guides/installation.md + - Production: guides/production.md + - Share Sessions: guides/session-sharing.md + - Replay Sessions: guides/audit-replay.md + - Manage User Permissions: guides/user-permissions.md + - Label Nodes: guides/node-labels.md + - Teleport with OpenSSH: guides/openssh.md + - Trusted Clusters: trustedclusters.md + - AWS: aws_oss.md + - Concepts: + - Teleport Basic Concepts: concepts/basics.md + - Teleport Users: concepts/users.md + - Teleport Nodes: concepts/nodes.md + - Teleport Auth: concepts/auth.md + - Teleport Proxy: concepts/proxy.md + - Architecture: concepts/architecture.md + - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md - OneLogin: ssh_one_login.md diff --git a/docs/4.1/guides/production.md b/docs/4.1/guides/production.md new file mode 100644 index 0000000000000..5875ded0a891e --- /dev/null +++ b/docs/4.1/guides/production.md @@ -0,0 +1,120 @@ +# Production Guide + +TODO Build off Quickstart, but include many more details and multi-node set up. + +Minimal Config example + +Include security considerations. Address vulns in quay? + +[TOC] + +## Prerequisites + +* Read about [Teleport Basics](../concepts/basics) +* Read through the [Installation Guide](../guides/installation) to see the available packages and binaries available. +* Read the CLI Docs for [`teleport`](../cli-docs/#teleport) + +## Designing Your Cluster + +Before installing anything there are a few things you should think about. + +* Where will you host Teleport + * On-premises + * Cloud VMs such as AWS EC2 or GCE + * An existing Kubernetes Cluster +* What does your existing network configuration look like? + * Are you able to administer the network firewall rules yourself or do you need to work with a network admin? + * Are these nodes accessible to the public Internet or behind NAT? +* Which users ([Roles or ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) on k8s) are set up on the existing system? + * Can you add new users or Roles yourself or do you need to work with a system admin? + +## Firewall Configuration + +Teleport services listen on several ports. This table shows the default port numbers. + +|Port | Service | Description | Ingress | Egress +|----------|------------|-------------|---------|---------- +| 3080 | Proxy | HTTPS port clients connect to. Used to authenticate `tsh` users and web users into the cluster. | Allow inbound connections from HTTP and SSH clients.| Allow outbound connections to HTTP and SSH clients. +| 3023 | Proxy | SSH port clients connect to after authentication. A proxy will forward this connection to port `3022` on the destination node. | Allow inbound traffic from SSH clients. | Allow outbound traffic to SSH clients. +| 3022 | Node | SSH port to the Node Service. This is Teleport's equivalent of port `22` for SSH. 
| Allow inbound traffic from proxy host. | Allow outbound traffic to the proxy host. +| 3025 | Auth | SSH port used by the Auth Service to serve its Auth API to other nodes in a cluster. | Allow inbound connections from all cluster nodes. | Allow outbound traffic to cluster nodes. +| 3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. | | + + + + +## Installation + +First + +## Running Teleport in Production + +### Systemd Unit File + +In production, we recommend starting teleport daemon via an init system like +`systemd`. If systemd and unit files are new to you check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's the recommended Teleport service unit file for systemd. + + +```yaml +[Unit] +Description=Teleport SSH Service +After=network.target + +[Service] +Type=simple +Restart=on-failure +# Set the nodes roles with the `--roles` +# In most production environments you will not +# want to run all three roles on a single host +# proxy,auth,node is the default value if none is set +ExecStart=/usr/local/bin/teleport start --roles=auth --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid +ExecReload=/bin/kill -HUP $MAINPID +PIDFile=/var/run/teleport.pid + +[Install] +WantedBy=multi-user.target +``` + +There are a couple of important things to notice about this file: + +1. The start command in the unit file specifies `--config` as a file and there are very few flags passed to the `teleport` binary. Most of the configuration for Teleport should be done in the [configuration file](../configuration). + +2. The **ExecReload** command allows admins to run `systemctl reload teleport`. This will attempt to perform a graceful restart of _*but it only works if network-based backend storage like [DynamoDB](../configuration/#storage) or [etc 3.3](../configuration/#storage) is configured*_. Graceful Restarts will fork a new process to handle new incoming requests and leave the old daemon process running until existing clients disconnect. + +You can also perform restarts/upgrades by sending `kill` signals +to a Teleport daemon manually. + +| Signal | Teleport Daemon Behavior +|-------------------------|--------------------------------------- +| `USR1` | Dumps diagnostics/debugging information into syslog. +| `TERM`, `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped. +| `USR2` | Forks a new Teleport daemon to serve new connections. +| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. + + + +This will copy Teleport binaries to `/usr/local/bin`. + +Let's start Teleport. First, create a directory for Teleport +to keep its data. By default it's `/var/lib/teleport`. Then start `teleport` daemon: + +```bash +$ sudo teleport start +``` + +!!! danger "WARNING": + Teleport stores data in `/var/lib/teleport`. Make sure that regular/non-admin users do not + have access to this folder on the Auth server. + + +If you are logged in as `root` you may want to create a new OS-level user first. On linux create a new user called `` with the following commands: +```bash +$ adduser +$ su +``` + +Security considerations on installing tctl under root or not + +!!! danger "WARNING": + Teleport stores data in `/var/lib/teleport`. 
Make sure that regular/non-admin users do not + have access to this folder on the Auth server.--> From 6153a500b180837d71bc59b097fea65e945e18a4 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Mon, 14 Oct 2019 14:45:38 +0300 Subject: [PATCH 02/23] reorg admin guide --- docs/4.1/admin-guide.md | 109 ++++------------------------------------ 1 file changed, 9 insertions(+), 100 deletions(-) diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index 800e6153de4e1..628f6c505d806 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -63,99 +63,13 @@ $ curl https://get.gravitational.com/teleport-v4.0.8-darwin-amd64-bin.tar.gz.sha 0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz ``` -## Definitions - -Before diving into configuring and running Teleport, it helps to take a look at -the [Teleport Architecture](/architecture) and review the key concepts this -document will be referring to: - -|Concept | Description -|----------|------------ -|Node | Synonym to "server" or "computer", something one can "SSH to". A node must be running the [ `teleport` ](../cli-docs/#teleport) daemon with "node" role/service turned on. -|Certificate Authority (CA) | A pair of public/private keys Teleport uses to manage access. A CA can sign a public key of a user or node, establishing their cluster membership. -|Teleport Cluster | A Teleport Auth Service contains two CAs. One is used to sign user keys and the other signs node keys. A collection of nodes connected to the same CA is called a "cluster". -|Cluster Name | Every Teleport cluster must have a name. If a name is not supplied via `teleport.yaml` configuration file, a GUID will be generated.**IMPORTANT:** renaming a cluster invalidates its keys and all certificates it had created. -|Trusted Cluster | Teleport Auth Service can allow 3rd party users or nodes to connect if their public keys are signed by a trusted CA. A "trusted cluster" is a pair of public keys of the trusted CA. It can be configured via `teleport.yaml` file. - -## Teleport Daemon - -The Teleport daemon is called [ `teleport` ](./cli-docs/#teleport) and it supports -the following commands: - -|Command | Description -|------------|------------------------------------------------------- -|start | Starts the Teleport daemon. -|configure | Dumps a sample configuration file in YAML format into standard output. -|version | Shows the Teleport version. -|status | Shows the status of a Teleport connection. This command is only available from inside of an active SSH session. -|help | Shows help. - -When experimenting, you can quickly start [ `teleport` ](../cli-docs/#teleport) -with verbose logging by typing [ `teleport start -d` ](./cli-docs/#teleport-start) -. +When experimenting, you can quickly start `teleport` with verbose logging by typing `teleport start -d`. !!! danger "WARNING" Teleport stores data in `/var/lib/teleport` . Make sure that regular/non-admin users do not have access to this folder on the Auth server. -### Systemd Unit File - -In production, we recommend starting teleport daemon via an init system like -`systemd` . 
Here's the recommended Teleport service unit file for systemd: - -``` yaml -[Unit] -Description=Teleport SSH Service -After=network.target - -[Service] -Type=simple -Restart=on-failure -ExecStart=/usr/local/bin/teleport start --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid -ExecReload=/bin/kill -HUP $MAINPID -PIDFile=/var/run/teleport.pid - -[Install] -WantedBy=multi-user.target -``` - -### Graceful Restarts - -If using the systemd service unit file above, executing `systemctl reload -teleport` will perform a graceful restart, i.e.the Teleport daemon will fork a -new process to handle new incoming requests, leaving the old daemon process -running until existing clients disconnect. - -!!! warning "Version warning": - Graceful restarts only work if Teleport is - deployed using network-based storage like DynamoDB or etcd 3.3+. Future - versions of Teleport will not have this limitation. - -You can also perform restarts/upgrades by sending `kill` signals to a Teleport -daemon manually. - -| Signal | Teleport Daemon Behavior -|-------------------------|--------------------------------------- -| `USR1` | Dumps diagnostics/debugging information into syslog. -| `TERM` , `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped. -| `USR2` | Forks a new Teleport daemon to serve new connections. -| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. - -### Ports - -Teleport services listen on several ports. This table shows the default port -numbers. - -|Port | Service | Description -|----------|------------|------------------------------------------- -|3022 | Node | SSH port. This is Teleport's equivalent of port `#22` for SSH. -|3023 | Proxy | SSH port clients connect to. A proxy will forward this connection to port `#3022` on the destination node. -|3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. -|3025 | Auth | SSH port used by the Auth Service to serve its API to other nodes in a cluster. -|3080 | Proxy | HTTPS connection to authenticate `tsh` users and web users into the cluster. The same connection is used to serve a Web UI. -|3026 | Kubernetes Proxy | HTTPS Kubernetes proxy (if enabled) - ### Filesystem Layout By default, a Teleport node has the following files present. The location of all @@ -837,9 +751,9 @@ dijkstra c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro ### Untrusted Auth Servers Teleport nodes use the HTTPS protocol to offer the join tokens to the auth -server running on `10.0.10.5` in the example above. In a zero-trust environment, -you must assume that an attacker can highjack the IP address of the auth server -e.g. `10.0.10.5` . +server running on `10.0.10.5` in the example above. In a zero-trust +environment, you must assume that an attacker can highjack the IP address of +the auth server e.g. `10.0.10.5`. To prevent this from happening, you need to supply every new node with an additional bit of information about the auth server. This technique is called @@ -930,9 +844,7 @@ application of arbitrary key:value pairs to each node, called labels. There are two kinds of labels: 1. `static labels` do not change over time, while - [ `teleport` ](../cli-docs/#teleport) process is running. - Examples of static labels are physical location of nodes, name of the environment (staging vs production), etc. 
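For reference, a minimal `ssh_service` snippet showing a static label alongside a dynamic label command might look like the sketch below. The label names, values, and the `uptime` command are illustrative only; the schema mirrors the sample configuration file elsewhere in this guide.

``` yaml
ssh_service:
  enabled: yes
  # static labels: fixed key:value pairs set in the config file
  labels:
    environment: production
    role: database
  # dynamic labels ("label commands"): teleport runs the command on the
  # given period and uses its output as the label value
  commands:
    - name: uptime
      command: ['/usr/bin/uptime', '-p']
      period: 5m0s
```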
@@ -986,8 +898,8 @@ ssh_service: must be set) which also includes shell scripts with a proper [shebang line](https://en.wikipedia.org/wiki/Shebang_(Unix)). -**Important:** notice that `command` setting is an array where the first element -is a valid executable and each subsequent element is an argument, i.e: +**Important:** notice that `command` setting is an array where the first element is +a valid executable and each subsequent element is an argument, i.e: ``` yaml # valid syntax: @@ -1785,10 +1697,9 @@ $ cat cluster_node_keys @cert-authority *.graviton-auth ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLNduBoHQaqi+kgkq3gLYjc6JIyBBnCFLgm63b5rtmWl/CJD7T9HWHxZphaS1jra6CWdboLeTp6sDUIKZ/Qw1MKFlfoqZZ8k6to43bxx7DvAHs0Te4WpuS/YRmWFhb6mMVOa8Rd4/9jE+c0f9O/t7X4m5iR7Fp7Tt+R/pjJfr03Loi6TYP/61AgXD/BkVDf+IcU4+9nknl+kaVPSGcPS9/Vbni1208Q+VN7B7Umy71gCh02gfv3rBGRgjT/cRAivuVoH/z3n5UwWg+9R3GD/l+XZKgv+pfe3OHoyDFxYKs9JaX0+GWc504y3Grhos12Lb8sNmMngxxxQ/KUDOV9z+R type=host ``` -!!! tip "Note": - When sharing the @cert-authority make sure that the URL for the - proxy is correct. In the above example, `*.graviton-auth` should be changed to - teleport.example.com. +!!! tip "Note": When sharing the @cert-authority make sure that the URL for the + proxy is correct. In the above example, `*.graviton-auth` should be changed + to teleport.example.com. On your client machine, you need to import these keys. It will allow your OpenSSH client to verify that host's certificates are signed by the trusted CA @@ -2347,9 +2258,7 @@ clients, etc), the following rules apply: upgrade to 3.4 first. * Teleport clients ( [ `tsh` ](../cli-docs/#tsh) for users and - [ `tctl` ](../cli-docs/#tctl) for admins) may not be compatible - if older than the auth or the proxy server. They will print an error if there is an incompatibility. From b28f7cf8b4d95cc49de10579fc1dc6f49ed90812 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Tue, 15 Oct 2019 14:18:12 +0300 Subject: [PATCH 03/23] saving the file would help --- docs/4.1/architecture/nodes.md | 4 ++++ docs/4.1/installation.md | 2 +- 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/docs/4.1/architecture/nodes.md b/docs/4.1/architecture/nodes.md index 9820324364179..d44028c1d9d9c 100644 --- a/docs/4.1/architecture/nodes.md +++ b/docs/4.1/architecture/nodes.md @@ -121,4 +121,8 @@ file. * [Teleport Users](./users) * [Teleport Auth](./auth) * [Teleport Proxy](./proxy) +<<<<<<< HEAD:docs/4.1/architecture/nodes.md +======= +* [Architecture](./architecture) +>>>>>>> saving the file would help:docs/4.1/concepts/nodes.md diff --git a/docs/4.1/installation.md b/docs/4.1/installation.md index b422ca21da4b0..f1c66b3d53cdd 100644 --- a/docs/4.1/installation.md +++ b/docs/4.1/installation.md @@ -165,4 +165,4 @@ $ sudo chown $USER /var/lib/teleport If the build succeeds the binaries `teleport, tsh`, and `tctl` are now in the directory `$GOPATH/src/github.com/gravitational/teleport/build` - \ No newline at end of file + From 75403caad34fb98a8f61d06534ae821ea1ee172e Mon Sep 17 00:00:00 2001 From: Heather Young Date: Wed, 16 Oct 2019 23:55:54 +0300 Subject: [PATCH 04/23] fix conflict in nodes guide --- docs/4.1/architecture/nodes.md | 5 ----- 1 file changed, 5 deletions(-) diff --git a/docs/4.1/architecture/nodes.md b/docs/4.1/architecture/nodes.md index d44028c1d9d9c..02d78035afdb0 100644 --- a/docs/4.1/architecture/nodes.md +++ b/docs/4.1/architecture/nodes.md @@ -121,8 +121,3 @@ file. 
* [Teleport Users](./users) * [Teleport Auth](./auth) * [Teleport Proxy](./proxy) -<<<<<<< HEAD:docs/4.1/architecture/nodes.md - -======= -* [Architecture](./architecture) ->>>>>>> saving the file would help:docs/4.1/concepts/nodes.md From e41783bfe4a364bebbbb4abd5ac56e295b8acfe8 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 16:53:42 +0300 Subject: [PATCH 05/23] move content from admin to prod guide --- docs/4.1/admin-guide.md | 100 ++------------- docs/4.1/guides/production.md | 120 ------------------ docs/4.1/production.md | 221 ++++++++++++++++++++++++++++++++++ 3 files changed, 230 insertions(+), 211 deletions(-) delete mode 100644 docs/4.1/guides/production.md create mode 100644 docs/4.1/production.md diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index 628f6c505d806..e793d1bd499a8 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -1,87 +1,5 @@ # Teleport Admin Manual -This manual covers the installation and configuration of Teleport and the -ongoing management of a Teleport cluster. It assumes that the reader has good -understanding of Linux administration. - -## Installing - -To install, download the official binaries from the [Teleport -Downloads](https://gravitational.com/teleport/download/) section on our web site -and run: - -``` -$ tar -xzf teleport-binary-release.tar.gz -$ sudo make install -``` - -### Installing from Source - -Gravitational Teleport is written in Go language. It requires Golang v1.8.3 or -newer. - -``` bash -# get the source & build: -$ mkdir -p $GOPATH/src/github.com/gravitational -$ cd $GOPATH/src/github.com/gravitational -$ git clone https://github.com/gravitational/teleport.git -$ cd teleport -$ make full - -# create the default data directory before starting: -$ sudo mkdir -p /var/lib/teleport -``` - -### Teleport Checksum - -Gravitational Teleport provides a checksum from the Downloads page. This can be -used to verify the integrity of our binary. - -![Teleport Checksum](img/teleport-sha.png) - -**Checking Checksum on Mac OS** - -``` bash -$ shasum -a 256 teleport-v4.0.8-darwin-amd64-bin.tar.gz -0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz -``` - -**Checking Checksum on Linux** - -``` bash -$ sha256sum teleport-v4.0.8-darwin-amd64-bin.tar.gz -0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz -``` - -**Checking Checksum on Automated Systems** - -If you download Teleport via an automated system, you can programmatically -obtain the checksum by adding `.sha256` to the binary. - -``` bash -$ curl https://get.gravitational.com/teleport-v4.0.8-darwin-amd64-bin.tar.gz.sha256 -0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz -``` - -When experimenting, you can quickly start `teleport` with verbose logging by typing `teleport start -d`. - -!!! danger "WARNING" - Teleport stores data in `/var/lib/teleport` . Make sure that - regular/non-admin users do not have access to this folder on the Auth - server. - -### Filesystem Layout - -By default, a Teleport node has the following files present. The location of all -of them is configurable. 
- -| Full path | Purpose | -|---------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `/etc/teleport.yaml` | Teleport configuration file (optional).| -| `/usr/local/bin/teleport` | Teleport daemon binary.| -| `/usr/local/bin/tctl` | Teleport admin tool. It is only needed for auth servers.| -| `/var/lib/teleport` | Teleport data directory. Nodes keep their keys and certificates there. Auth servers store the audit log and the cluster keys there, but the audit log storage can be further configured via `auth_service` section in the config file.| - ## Configuration You should use a [configuration file](#configuration-file) to configure the @@ -751,9 +669,9 @@ dijkstra c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro ### Untrusted Auth Servers Teleport nodes use the HTTPS protocol to offer the join tokens to the auth -server running on `10.0.10.5` in the example above. In a zero-trust -environment, you must assume that an attacker can highjack the IP address of -the auth server e.g. `10.0.10.5`. +server running on `10.0.10.5` in the example above. In a zero-trust environment, +you must assume that an attacker can highjack the IP address of the auth server +e.g. `10.0.10.5` . To prevent this from happening, you need to supply every new node with an additional bit of information about the auth server. This technique is called @@ -849,7 +767,6 @@ two kinds of labels: environment (staging vs production), etc. 2. `dynamic labels` also known as "label commands" allow to generate labels at - runtime. Teleport will execute an external command on a node at a configurable frequency and the output of a command becomes the label value. Examples include reporting load averages, presence of a process, time after @@ -898,8 +815,8 @@ ssh_service: must be set) which also includes shell scripts with a proper [shebang line](https://en.wikipedia.org/wiki/Shebang_(Unix)). -**Important:** notice that `command` setting is an array where the first element is -a valid executable and each subsequent element is an argument, i.e: +**Important:** notice that `command` setting is an array where the first element +is a valid executable and each subsequent element is an argument, i.e: ``` yaml # valid syntax: @@ -1697,9 +1614,10 @@ $ cat cluster_node_keys @cert-authority *.graviton-auth ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLNduBoHQaqi+kgkq3gLYjc6JIyBBnCFLgm63b5rtmWl/CJD7T9HWHxZphaS1jra6CWdboLeTp6sDUIKZ/Qw1MKFlfoqZZ8k6to43bxx7DvAHs0Te4WpuS/YRmWFhb6mMVOa8Rd4/9jE+c0f9O/t7X4m5iR7Fp7Tt+R/pjJfr03Loi6TYP/61AgXD/BkVDf+IcU4+9nknl+kaVPSGcPS9/Vbni1208Q+VN7B7Umy71gCh02gfv3rBGRgjT/cRAivuVoH/z3n5UwWg+9R3GD/l+XZKgv+pfe3OHoyDFxYKs9JaX0+GWc504y3Grhos12Lb8sNmMngxxxQ/KUDOV9z+R type=host ``` -!!! tip "Note": When sharing the @cert-authority make sure that the URL for the - proxy is correct. In the above example, `*.graviton-auth` should be changed - to teleport.example.com. +!!! tip "Note": + When sharing the @cert-authority make sure that the URL for the + proxy is correct. In the above example, `*.graviton-auth` should be changed to + teleport.example.com. On your client machine, you need to import these keys. 
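A minimal sketch of that import, assuming the exported keys were saved to `cluster_node_keys` as in the example above, is to append the `@cert-authority` entries to the client's `known_hosts` file:

``` bash
# on the client machine: trust host certificates signed by the cluster CA
$ cat cluster_node_keys >> ~/.ssh/known_hosts
```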
It will allow your OpenSSH client to verify that host's certificates are signed by the trusted CA diff --git a/docs/4.1/guides/production.md b/docs/4.1/guides/production.md deleted file mode 100644 index 5875ded0a891e..0000000000000 --- a/docs/4.1/guides/production.md +++ /dev/null @@ -1,120 +0,0 @@ -# Production Guide - -TODO Build off Quickstart, but include many more details and multi-node set up. - -Minimal Config example - -Include security considerations. Address vulns in quay? - -[TOC] - -## Prerequisites - -* Read about [Teleport Basics](../concepts/basics) -* Read through the [Installation Guide](../guides/installation) to see the available packages and binaries available. -* Read the CLI Docs for [`teleport`](../cli-docs/#teleport) - -## Designing Your Cluster - -Before installing anything there are a few things you should think about. - -* Where will you host Teleport - * On-premises - * Cloud VMs such as AWS EC2 or GCE - * An existing Kubernetes Cluster -* What does your existing network configuration look like? - * Are you able to administer the network firewall rules yourself or do you need to work with a network admin? - * Are these nodes accessible to the public Internet or behind NAT? -* Which users ([Roles or ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) on k8s) are set up on the existing system? - * Can you add new users or Roles yourself or do you need to work with a system admin? - -## Firewall Configuration - -Teleport services listen on several ports. This table shows the default port numbers. - -|Port | Service | Description | Ingress | Egress -|----------|------------|-------------|---------|---------- -| 3080 | Proxy | HTTPS port clients connect to. Used to authenticate `tsh` users and web users into the cluster. | Allow inbound connections from HTTP and SSH clients.| Allow outbound connections to HTTP and SSH clients. -| 3023 | Proxy | SSH port clients connect to after authentication. A proxy will forward this connection to port `3022` on the destination node. | Allow inbound traffic from SSH clients. | Allow outbound traffic to SSH clients. -| 3022 | Node | SSH port to the Node Service. This is Teleport's equivalent of port `22` for SSH. | Allow inbound traffic from proxy host. | Allow outbound traffic to the proxy host. -| 3025 | Auth | SSH port used by the Auth Service to serve its Auth API to other nodes in a cluster. | Allow inbound connections from all cluster nodes. | Allow outbound traffic to cluster nodes. -| 3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. | | - - - - -## Installation - -First - -## Running Teleport in Production - -### Systemd Unit File - -In production, we recommend starting teleport daemon via an init system like -`systemd`. If systemd and unit files are new to you check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's the recommended Teleport service unit file for systemd. 
- - -```yaml -[Unit] -Description=Teleport SSH Service -After=network.target - -[Service] -Type=simple -Restart=on-failure -# Set the nodes roles with the `--roles` -# In most production environments you will not -# want to run all three roles on a single host -# proxy,auth,node is the default value if none is set -ExecStart=/usr/local/bin/teleport start --roles=auth --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid -ExecReload=/bin/kill -HUP $MAINPID -PIDFile=/var/run/teleport.pid - -[Install] -WantedBy=multi-user.target -``` - -There are a couple of important things to notice about this file: - -1. The start command in the unit file specifies `--config` as a file and there are very few flags passed to the `teleport` binary. Most of the configuration for Teleport should be done in the [configuration file](../configuration). - -2. The **ExecReload** command allows admins to run `systemctl reload teleport`. This will attempt to perform a graceful restart of _*but it only works if network-based backend storage like [DynamoDB](../configuration/#storage) or [etc 3.3](../configuration/#storage) is configured*_. Graceful Restarts will fork a new process to handle new incoming requests and leave the old daemon process running until existing clients disconnect. - -You can also perform restarts/upgrades by sending `kill` signals -to a Teleport daemon manually. - -| Signal | Teleport Daemon Behavior -|-------------------------|--------------------------------------- -| `USR1` | Dumps diagnostics/debugging information into syslog. -| `TERM`, `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped. -| `USR2` | Forks a new Teleport daemon to serve new connections. -| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. - - - -This will copy Teleport binaries to `/usr/local/bin`. - -Let's start Teleport. First, create a directory for Teleport -to keep its data. By default it's `/var/lib/teleport`. Then start `teleport` daemon: - -```bash -$ sudo teleport start -``` - -!!! danger "WARNING": - Teleport stores data in `/var/lib/teleport`. Make sure that regular/non-admin users do not - have access to this folder on the Auth server. - - -If you are logged in as `root` you may want to create a new OS-level user first. On linux create a new user called `` with the following commands: -```bash -$ adduser -$ su -``` - -Security considerations on installing tctl under root or not - -!!! danger "WARNING": - Teleport stores data in `/var/lib/teleport`. Make sure that regular/non-admin users do not - have access to this folder on the Auth server.--> diff --git a/docs/4.1/production.md b/docs/4.1/production.md new file mode 100644 index 0000000000000..b4ba0fe8dadbd --- /dev/null +++ b/docs/4.1/production.md @@ -0,0 +1,221 @@ +# Production Guide + +Minimal Config example + +Include security considerations. Address vulns in quay? + +[TOC] + +## Prerequisites + +* Read the [Architecture Overview](../architecture/overview) +* Read through the [Installation Guide](../installation) to see the available packages and binaries available. +* Read the CLI Docs for [`teleport`](../cli-docs/#teleport) + +## Designing Your Cluster + +Before installing anything there are a few things you should think about. + +* Where will you host Teleport? 
+ * On-premises + * Cloud VMs such as AWS EC2 or GCE + * An existing Kubernetes Cluster +* What does your existing network configuration look like? + * Are you able to administer the network firewall rules yourself or do you need to work with a network admin? + * Are these nodes accessible to the public Internet or behind NAT? +* Which users ([Roles or ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) on k8s) are set up on the existing system? + * Can you add new users or Roles yourself or do you need to work with a system admin? + +## Firewall Configuration + +Teleport services listen on several ports. This table shows the default port numbers. + +|Port | Service | Description | Ingress | Egress +|----------|------------|-------------|---------|---------- +| 3080 | Proxy | HTTPS port clients connect to. Used to authenticate `tsh` users and web users into the cluster. | Allow inbound connections from HTTP and SSH clients.| Allow outbound connections to HTTP and SSH clients. +| 3023 | Proxy | SSH port clients connect to after authentication. A proxy will forward this connection to port `3022` on the destination node. | Allow inbound traffic from SSH clients. | Allow outbound traffic to SSH clients. +| 3022 | Node | SSH port to the Node Service. This is Teleport's equivalent of port `22` for SSH. | Allow inbound traffic from proxy host. | Allow outbound traffic to the proxy host. +| 3025 | Auth | SSH port used by the Auth Service to serve its Auth API to other nodes in a cluster. | Allow inbound connections from all cluster nodes. | Allow outbound traffic to cluster nodes. +| 3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. | | + + + +## Installation + +We have a detailed [installation guide](../installation) which shows how to +install all available binaries or [install from +source](#../installation/#installing-from-source). Reference that guide to learn +the best way to install Teleport for your system and the come back here to +finish your production install. + +### Filesystem Layout + +By default a Teleport node has the following files present. The location of all +of them is configurable. + +| Default path | Purpose | +|------------------------------|----------| +| `/etc/teleport.yaml` | Teleport configuration file.| +| `/usr/local/bin/teleport` | Teleport daemon binary.| +| `/usr/local/bin/tctl` | Teleport admin tool. It is only needed for auth servers.| +| `/usr/local/bin/tsh` | Teleport CLI client tool. It is needed on any node that needs to connect to the cluster.| +| `/var/lib/teleport` | Teleport data directory. Nodes keep their keys and certificates there. Auth servers store the audit log and the cluster keys there, but the audit log storage can be further configured via `auth_service` section in the config file.| + +## Running Teleport in Production + +### Systemd Unit File + +In production, we recommend starting teleport daemon via an init system like +`systemd`. If systemd and unit files are new to you check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's the recommended Teleport service unit file for systemd. 
+ + +```yaml +[Unit] +Description=Teleport SSH Service +After=network.target + +[Service] +Type=simple +Restart=on-failure +# Set the nodes roles with the `--roles` +# In most production environments you will not +# want to run all three roles on a single host +# --roles='proxy,auth,node' is the default value +# if none is set +ExecStart=/usr/local/bin/teleport start --roles=auth --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid +ExecReload=/bin/kill -HUP $MAINPID +PIDFile=/var/run/teleport.pid + +[Install] +WantedBy=multi-user.target +``` + +There are a couple of important things to notice about this file: + +1. The start command in the unit file specifies `--config` as a file and there + are very few flags passed to the `teleport` binary. Most of the configuration + for Teleport should be done in the [configuration file](../configuration). + +2. The **ExecReload** command allows admins to run `systemctl reload teleport`. + This will attempt to perform a graceful restart of _*but it only works if + network-based backend storage like [DynamoDB](../configuration/#storage) or + [etc 3.3](../configuration/#storage) is configured*_. Graceful Restarts will + fork a new process to handle new incoming requests and leave the old daemon + process running until existing clients disconnect. + +### Start the Teleport Service + +You can start Teleport as a Systemd Unit by enabling the `.service` file +with the `systemctl` tool. + +```bash +$ cd /etc/systemd/system +# Use your text editor of choice to create the .service file +# Here we use vim +$ vi teleport.service +# use the file above as is, or customize as needed +# save the file +$ systemctl enable teleport +$ systemctl start teleport +# show the status of the unit +$ systecmtl status teleport +# follow tail of service logs +$ journalctl -fu teleport +# If you modify teleport.service later you will need to +# reload the systemctl daemon and reload teleport +# to apply your changes +$ systemctl daemon-reload +$ systemctl reload teleport +``` + +You can also perform restarts or upgrades by sending `kill` signals +to a Teleport daemon manually. + +| Signal | Teleport Daemon Behavior +|-------------------------|--------------------------------------- +| `USR1` | Dumps diagnostics/debugging information into syslog. +| `TERM`, `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped. +| `USR2` | Forks a new Teleport daemon to serve new connections. +| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. This is the signal sent to trigger a graceful restart. + +### Adding Nodes to the Cluster + +We've written a dedicated guide on [Adding Nodes to your +Cluster](./guides/node-join) which shows how to generate or set join tokens and +use them to add nodes. + +## Security Considerations + +### CA Pinning + +Teleport nodes use the HTTPS protocol to offer the join tokens to the auth +server. In a zero-trust environment, you must assume that an attacker can +highjack the IP address of the auth server. + +To prevent this from happening, you need to supply every new node with an +additional bit of information about the auth server. This technique is called +"CA Pinning". It works by asking the auth server to produce a "CA Pin", which +is a hashed value of it's private key, i.e. it cannot be forged by an attacker. 
+ +To get the current CA Pin run this on the auth server: + +```bash +$ tctl status +Cluster staging.example.com +User CA never updated +Host CA never updated +CA pin sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 +``` + +The CA pin at the bottom needs to be passed to the new nodes when they're starting +for the first time, i.e. when they join a cluster: + +Via CLI: + +```bash +$ teleport start \ + --roles=node \ + --token=1ac590d36493acdaa2387bc1c492db1a \ + --ca-pin=sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 \ + --auth-server=10.12.0.6:3025 +``` + +or via `/etc/teleport.yaml` on a node: + +```yaml +teleport: + auth_token: "1ac590d36493acdaa2387bc1c492db1a" + ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" + auth_servers: + - "10.12.0.6:3025" +``` + +!!! warning "Warning": + If a CA pin not provided, Teleport node will join a cluster but it will + print a `WARN` message (warning) into it's standard error output. + +!!! warning "Warning": + The CA pin becomes invalid if a Teleport administrator performs the CA + rotation by executing `tctl auth rotate`. + +### Secure Data Storage + +By default the `teleport` daemon uses the local +directory `/var/lib/teleport` to store its data. This applies to any role or +service including Auth, Node, or Proxy. While an Auth node hosts the most +sensitive data you will want to prevent unauthorized access to this directory. +Make sure that regular/non-admin users do not have access to this folder, +particularly on the Auth server. Change the ownership of the directory with +[`chown`](https://linuxize.com/post/linux-chown-command/) + +```bash +$ sudo teleport start +``` + +If you are logged in as `root` you may want to create a new OS-level user first. 
On linux create a new user called `` with the following commands: +```bash +$ adduser +$ su +``` + +Security considerations on installing tctl under root or not From dea3ed094f7c536b09bbb01df097d1b0d86335b2 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:16:04 +0300 Subject: [PATCH 06/23] add wip configuration page, rm from admin guide --- docs/4.1.yaml | 23 +- docs/4.1/admin-guide.md | 474 -------------------------------------- docs/4.1/configuration.md | 445 +++++++++++++++++++++++++++++++++++ 3 files changed, 451 insertions(+), 491 deletions(-) create mode 100644 docs/4.1/configuration.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 2b5f58d6ee265..bf37eff40b283 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -26,27 +26,16 @@ pages: - Documentation: - Introduction: intro.md - Quick Start Guide: quickstart.md + - Installation: installation.md + - Production: production.md + - CLI Docs: cli-docs.md + - YAML Configuration: configuration.md - User Manual: user-manual.md - Admin Manual: admin-guide.md - FAQ: faq.md - Guides: - - Quickstart: guides/quickstart.md - - Installation: guides/installation.md - - Production: guides/production.md - - Share Sessions: guides/session-sharing.md - - Replay Sessions: guides/audit-replay.md - - Manage User Permissions: guides/user-permissions.md - - Label Nodes: guides/node-labels.md - - Teleport with OpenSSH: guides/openssh.md - - Trusted Clusters: trustedclusters.md - - AWS: aws_oss.md - - Concepts: - - Teleport Basic Concepts: concepts/basics.md - - Teleport Users: concepts/users.md - - Teleport Nodes: concepts/nodes.md - - Teleport Auth: concepts/auth.md - - Teleport Proxy: concepts/proxy.md - - Architecture: concepts/architecture.md + # TODO: Add How-To Guide on Managing Nodes, Users, Trusted Clusters + # etc. any common task should have a guide - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index e793d1bd499a8..6a24c7a1b5d36 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -1,479 +1,5 @@ # Teleport Admin Manual -## Configuration - -You should use a [configuration file](#configuration-file) to configure the -[ `teleport` ](../cli-docs/#teleport) daemon. For simple experimentation, you can -use command line flags with the [ `teleport start` ](./cli-docs/#teleport-start) -command. Read about all the allowed flags in the [CLI -Docs](./cli-docs/#teleport-start) or run `teleport start --help` - -### Configuration File - -Teleport uses the YAML file format for configuration. A sample configuration -file is shown below. By default, it is stored in `/etc/teleport.yaml` - -!!! note "IMPORTANT": - When editing YAML configuration, please pay attention to how your - editor handles white space. YAML requires consistent handling of - tab characters. - -``` yaml -# By default, this file should be stored in /etc/teleport.yaml - -# This section of the configuration file applies to all teleport -# services. -teleport: - # nodename allows to assign an alternative name this node can be reached by. - # by default it's equal to hostname - nodename: graviton - - # Data directory where Teleport daemon keeps its data. - # See "Filesystem Layout" section above for more details. - data_dir: /var/lib/teleport - - # Invitation token used to join a cluster. it is not used on - # subsequent starts - auth_token: xxxx-token-xxxx - - # Optional CA pin of the auth server. This enables more secure way of adding new - # nodes to a cluster. 
See "Adding Nodes" section above. - ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" - - # When running in multi-homed or NATed environments Teleport nodes need - # to know which IP it will be reachable at by other nodes - # - # This value can be specified as FQDN e.g. host.example.com - advertise_ip: 10.1.0.5 - - # list of auth servers in a cluster. you will have more than one auth server - # if you configure teleport auth to run in HA configuration. - # If adding a node located behind NAT, use the Proxy URL. e.g. - # auth_servers: - # - teleport-proxy.example.com:3080 - auth_servers: - - - 10.1.0.5:3025 - - 10.1.0.6:3025 - - # Teleport throttles all connections to avoid abuse. These settings allow - # you to adjust the default limits - connection_limits: - max_connections: 1000 - max_users: 250 - - # Logging configuration. Possible output values are 'stdout', 'stderr' and - # 'syslog'. Possible severity values are INFO, WARN and ERROR (default). - log: - output: stderr - severity: ERROR - - # Configuration for the storage back-end used for the cluster state and the - # audit log. Several back-end types are supported. See "High Availability" - # section of this Admin Manual below to learn how to configure DynamoDB, - # S3, etcd and other highly available back-ends. - storage: - # By default teleport uses the `data_dir` directory on a local filesystem - type: dir - - # Array of locations where the audit log events will be stored. by - # default they are stored in `/var/lib/teleport/log` - audit_events_uri: ['file:///var/lib/teleport/log', 'dynamodb://events_table_name', 'stdout://'] - - # Use this setting to configure teleport to store the recorded sessions in - # an AWS S3 bucket. see "Using Amazon S3" chapter for more information. - audit_sessions_uri: 's3://example.com/path/to/bucket?region=us-east-1' - - # Cipher algorithms that the server supports. This section only needs to be - # set if you want to override the defaults. - ciphers: - - - aes128-ctr - - aes192-ctr - - aes256-ctr - - aes128-gcm@openssh.com - - chacha20-poly1305@openssh.com - - # Key exchange algorithms that the server supports. This section only needs - # to be set if you want to override the defaults. - kex_algos: - - - curve25519-sha256@libssh.org - - ecdh-sha2-nistp256 - - ecdh-sha2-nistp384 - - ecdh-sha2-nistp521 - - # Message authentication code (MAC) algorithms that the server supports. - # This section only needs to be set if you want to override the defaults. - mac_algos: - - - hmac-sha2-256-etm@openssh.com - - hmac-sha2-256 - - # List of the supported ciphersuites. If this section is not specified, - # only the default ciphersuites are enabled. - ciphersuites: - - tls-rsa-with-aes-128-gcm-sha256 - - tls-rsa-with-aes-256-gcm-sha384 - - tls-ecdhe-rsa-with-aes-128-gcm-sha256 - - tls-ecdhe-ecdsa-with-aes-128-gcm-sha256 - - tls-ecdhe-rsa-with-aes-256-gcm-sha384 - - tls-ecdhe-ecdsa-with-aes-256-gcm-sha384 - - tls-ecdhe-rsa-with-chacha20-poly1305 - - tls-ecdhe-ecdsa-with-chacha20-poly1305 - -# This section configures the 'auth service': -auth_service: - # Turns 'auth' role on. Default is 'yes' - enabled: yes - - # A cluster name is used as part of a signature in certificates - # generated by this CA. - # - # We strongly recommend to explicitly set it to something meaningful as it - # becomes important when configuring trust between multiple clusters. 
- # - # By default an automatically generated name is used (not recommended) - # - # IMPORTANT: if you change cluster_name, it will invalidate all generated - # certificates and keys (may need to wipe out /var/lib/teleport directory) - cluster_name: "main" - - authentication: - # default authentication type. possible values are 'local', 'oidc' and 'saml' - # only local authentication (Teleport's own user DB) is supported in the open - # source version - type: local - # second_factor can be off, otp, or u2f - second_factor: otp - # this section is used if second_factor is set to 'u2f' - u2f: - # app_id must point to the URL of the Teleport Web UI (proxy) accessible - # by the end users - app_id: https://localhost:3080 - # facets must list all proxy servers if there are more than one deployed - facets: - - - https://localhost:3080 - - # IP and the port to bind to. Other Teleport nodes will be connecting to - # this port (AKA "Auth API" or "Cluster API") to validate client - # certificates - listen_addr: 0.0.0.0:3025 - - # The optional DNS name the auth server if located behind a load balancer. - # (see public_addr section below) - public_addr: auth.example.com:3025 - - # Pre-defined tokens for adding new nodes to a cluster. Each token specifies - # the role a new node will be allowed to assume. The more secure way to - # add nodes is to use `ttl node add --ttl` command to generate auto-expiring - # tokens. - # - # We recommend to use tools like `pwgen` to generate sufficiently random - # tokens of 32+ byte length. - tokens: - - - "proxy,node:xxxxx" - - "auth:yyyy" - - # Optional setting for configuring session recording. Possible values are: - # "node" : sessions will be recorded on the node level (the default) - # "proxy" : recording on the proxy level, see "recording proxy mode" section. - # "off" : session recording is turned off - session_recording: "node" - - # This setting determines if a Teleport proxy performs strict host key checks. - # Only applicable if session_recording=proxy, see "recording proxy mode" for details. - proxy_checks_host_keys: yes - - # Determines if SSH sessions to cluster nodes are forcefully terminated - # after no activity from a client (idle client). - # Examples: "30m", "1h" or "1h30m" - client_idle_timeout: never - - # Determines if the clients will be forcefully disconnected when their - # certificates expire in the middle of an active SSH session. (default is 'no') - disconnect_expired_cert: no - - # Determines the interval at which Teleport will send keep-alive messages. The - # default value mirrors sshd at 15 minutes. keep_alive_count_max is the number - # of missed keep-alive messages before the server tears down the connection to the - # client. - keep_alive_interval: 15 - keep_alive_count_max: 3 - - # License file to start auth server with. Note that this setting is ignored - # in open-source Teleport and is required only for Teleport Pro, Business - # and Enterprise subscription plans. - # - # The path can be either absolute or relative to the configured `data_dir` - # and should point to the license file obtained from Teleport Download Portal. - # - # If not set, by default Teleport will look for the `license.pem` file in - # the configured `data_dir` . - license_file: /var/lib/teleport/license.pem - - # DEPRECATED in Teleport 3.2 (moved to proxy_service section) - kubeconfig_file: /path/to/kubeconfig - -# This section configures the 'node service': -ssh_service: - # Turns 'ssh' role on. 
Default is 'yes' - enabled: yes - - # IP and the port for SSH service to bind to. - listen_addr: 0.0.0.0:3022 - - # The optional public address the SSH service. This is useful if administrators - # want to allow users to connect to nodes directly, bypassing a Teleport proxy - # (see public_addr section below) - public_addr: node.example.com:3022 - - # See explanation of labels in "Labeling Nodes" section below - labels: - role: master - type: postgres - - # List of the commands to periodically execute. Their output will be used as node labels. - # See "Labeling Nodes" section below for more information and more examples. - commands: - # this command will add a label 'arch=x86_64' to a node - - - name: arch - - command: ['/bin/uname', '-p'] - period: 1h0m0s - - # enables reading ~/.tsh/environment before creating a session. by default - # set to false, can be set true here or as a command line flag. - permit_user_env: false - - # configures PAM integration. see below for more details. - pam: - enabled: no - service_name: teleport - -# This section configures the 'proxy service' -proxy_service: - # Turns 'proxy' role on. Default is 'yes' - enabled: yes - - # SSH forwarding/proxy address. Command line (CLI) clients always begin their - # SSH sessions by connecting to this port - listen_addr: 0.0.0.0:3023 - - # Reverse tunnel listening address. An auth server (CA) can establish an - # outbound (from behind the firewall) connection to this address. - # This will allow users of the outside CA to connect to behind-the-firewall - # nodes. - tunnel_listen_addr: 0.0.0.0:3024 - - # The HTTPS listen address to serve the Web UI and also to authenticate the - # command line (CLI) users via password+HOTP - web_listen_addr: 0.0.0.0:3080 - - # The DNS name the proxy HTTPS endpoint as accessible by cluster users. - # Defaults to the proxy's hostname if not specified. If running multiple - # proxies behind a load balancer, this name must point to the load balancer - # (see public_addr section below) - public_addr: proxy.example.com:3080 - - # The DNS name of the proxy SSH endpoint as accessible by cluster clients. - # Defaults to the proxy's hostname if not specified. If running multiple proxies - # behind a load balancer, this name must point to the load balancer. - # Use a TCP load balancer because this port uses SSH protocol. - ssh_public_addr: proxy.example.com:3023 - - # TLS certificate for the HTTPS connection. Configuring these properly is - # critical for Teleport security. - https_key_file: /var/lib/teleport/webproxy_key.pem - https_cert_file: /var/lib/teleport/webproxy_cert.pem - - # This section configures the Kubernetes proxy service - kubernetes: - # Turns 'kubernetes' proxy on. Default is 'no' - enabled: yes - - # Kubernetes proxy listen address. - listen_addr: 0.0.0.0:3026 - - # The DNS name of the Kubernetes proxy server that is accessible by cluster clients. - # If running multiple proxies behind a load balancer, this name must point to the - # load balancer. - public_addr: ['kube.example.com:3026'] - - # This setting is not required if the Teleport proxy service is - # deployed inside a Kubernetes cluster. Otherwise, Teleport proxy - # will use the credentials from this file: - kubeconfig_file: /path/to/kube/config -``` - -#### Public Addr - -Notice that all three Teleport services (proxy, auth, node) have an optional -`public_addr` property. The public address can take an IP or a DNS name. 
It can -also be a list of values: - -``` yaml -public_addr: ["proxy-one.example.com", "proxy-two.example.com"] -``` - -Specifying a public address for a Teleport service may be useful in the -following use cases: - -* You have multiple identical services, like proxies, behind a load balancer. -* You want Teleport to issue SSH certificate for the service with the additional - - principals, e.g.host names. - -## Authentication - -Teleport uses the concept of "authentication connectors" to authenticate users -when they execute [ `tsh login` ](../cli-docs/#tsh-login) command. There are three -types of authentication connectors: - -### Local Connector - -Local authentication is used to authenticate against a local Teleport user -database. This database is managed by [ `tctl users` ](./cli-docs/#tctl-users) -command. Teleport also supports second factor authentication (2FA) for the local -connector. There are three possible values (types) of 2FA: - - + `otp` is the default. It implements - - [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) - standard. You can use [Google - Authenticator](https://en.wikipedia.org/wiki/Google_Authenticator) or - [Authy](https://www.authy.com/) or any other TOTP client. - - + `u2f` implements [U2F](https://en.wikipedia.org/wiki/Universal_2nd_Factor) - - standard for utilizing hardware (USB) keys for second factor. - - + `off` turns off second factor authentication. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: local - second_factor: off -``` - -### Github OAuth 2.0 Connector - -This connector implements Github OAuth 2.0 authentication flow. Please refer to -Github documentation on [Creating an OAuth -App](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/) -to learn how to create and register an OAuth app. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: github -``` - -See [Github OAuth 2.0](#github-oauth-20) for details on how to configure it. - -### SAML - -This connector type implements SAML authentication. It can be configured against -any external identity manager like Okta or Auth0. This feature is only available -for Teleport Enterprise. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: saml -``` - -### OIDC - -Teleport implements OpenID Connect (OIDC) authentication, which is similar to -SAML in principle. This feature is only available for Teleport Enterprise. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: oidc -``` - -### FIDO U2F - -Teleport supports [FIDO U2F](https://www.yubico.com/about/background/fido/) -hardware keys as a second authentication factor. By default U2F is disabled. To -start using U2F: - -* Enable U2F in Teleport configuration `/etc/teleport.yaml` . -* For CLI-based logins you have to install - - [u2f-host](https://developers.yubico.com/libu2f-host/) utility. - -* For web-based logins you have to use Google Chrome, as it is the only browser - - supporting U2F at this time. 
- -``` yaml -# snippet from /etc/teleport.yaml to show an example configuration of U2F: -auth_service: - authentication: - type: local - second_factor: u2f - # this section is needed only if second_factor is set to 'u2f' - u2f: - # app_id must point to the URL of the Teleport Web UI (proxy) accessible - # by the end users - app_id: https://localhost:3080 - # facets must list all proxy servers if there are more than one deployed - facets: - - https://localhost:3080 -``` - -For single-proxy setups, the `app_id` setting can be equal to the domain name of -the proxy, but this will prevent you from adding more proxies without changing -the `app_id` . For multi-proxy setups, the `app_id` should be an HTTPS URL -pointing to a JSON file that mirrors `facets` in the auth config. - -!!! warning "Warning": - The `app_id` must never change in the lifetime of the - cluster. If the App ID changes, all existing U2F key registrations will - become invalid and all users who use U2F as the second factor will need to - re-register. When adding a new proxy server, make sure to add it to the list - of "facets" in the configuration file, but also to the JSON file referenced - by `app_id` - -**Logging in with U2F** - -For logging in via the CLI, you must first install -[u2f-host](https://developers.yubico.com/libu2f-host/). Installing: - -``` yaml -# OSX: -$ brew install libu2f-host - -# Ubuntu 16.04 LTS: -$ apt-get install u2f-host -``` - -Then invoke `tsh ssh` as usual to authenticate: - -``` -tsh --proxy ssh -``` - -!!! tip "Version Warning": External user identities are only supported in - [Teleport Enterprise](/enterprise/). Please reach out to - -`sales@gravitational.com` for more information. - ## Adding and Deleting Users This section covers internal user identities, i.e.user accounts created and diff --git a/docs/4.1/configuration.md b/docs/4.1/configuration.md new file mode 100644 index 0000000000000..cff9603217cb4 --- /dev/null +++ b/docs/4.1/configuration.md @@ -0,0 +1,445 @@ +# YAML Configuration + +### Configuration File + +Teleport uses the YAML file format for configuration. A sample configuration file is shown below. By default, it is stored in `/etc/teleport.yaml` + +!!! note "IMPORTANT": + When editing YAML configuration, please pay attention to how your editor + handles white space. YAML requires consistent handling of tab characters. + +```yaml +# By default, this file should be stored in /etc/teleport.yaml + +# This section of the configuration file applies to all teleport +# services. +teleport: + # nodename allows to assign an alternative name this node can be reached by. + # by default it's equal to hostname + nodename: graviton + + # Data directory where Teleport daemon keeps its data. + # See "Filesystem Layout" section above for more details. + data_dir: /var/lib/teleport + + # Invitation token used to join a cluster. it is not used on + # subsequent starts + auth_token: xxxx-token-xxxx + + # Optional CA pin of the auth server. This enables more secure way of adding new + # nodes to a cluster. See "Adding Nodes" section above. + ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" + + # When running in multi-homed or NATed environments Teleport nodes need + # to know which IP it will be reachable at by other nodes + # + # This value can be specified as FQDN e.g. host.example.com + advertise_ip: 10.1.0.5 + + # list of auth servers in a cluster. you will have more than one auth server + # if you configure teleport auth to run in HA configuration. 
+ # If adding a node located behind NAT, use the Proxy URL. e.g. + # auth_servers: + # - teleport-proxy.example.com:3080 + auth_servers: + - 10.1.0.5:3025 + - 10.1.0.6:3025 + + # Teleport throttles all connections to avoid abuse. These settings allow + # you to adjust the default limits + connection_limits: + max_connections: 1000 + max_users: 250 + + # Logging configuration. Possible output values are 'stdout', 'stderr' and + # 'syslog'. Possible severity values are INFO, WARN and ERROR (default). + log: + output: stderr + severity: ERROR + + # Configuration for the storage back-end used for the cluster state and the + # audit log. Several back-end types are supported. See "High Availability" + # section of this Admin Manual below to learn how to configure DynamoDB, + # S3, etcd and other highly available back-ends. + storage: + # By default teleport uses the `data_dir` directory on a local filesystem + type: dir + + # Array of locations where the audit log events will be stored. by + # default they are stored in `/var/lib/teleport/log` + audit_events_uri: ['file:///var/lib/teleport/log', 'dynamodb://events_table_name', 'stdout://'] + + # Use this setting to configure teleport to store the recorded sessions in + # an AWS S3 bucket. see "Using Amazon S3" chapter for more information. + audit_sessions_uri: 's3://example.com/path/to/bucket?region=us-east-1' + + # Cipher algorithms that the server supports. This section only needs to be + # set if you want to override the defaults. + ciphers: + - aes128-ctr + - aes192-ctr + - aes256-ctr + - aes128-gcm@openssh.com + - chacha20-poly1305@openssh.com + + # Key exchange algorithms that the server supports. This section only needs + # to be set if you want to override the defaults. + kex_algos: + - curve25519-sha256@libssh.org + - ecdh-sha2-nistp256 + - ecdh-sha2-nistp384 + - ecdh-sha2-nistp521 + + # Message authentication code (MAC) algorithms that the server supports. + # This section only needs to be set if you want to override the defaults. + mac_algos: + - hmac-sha2-256-etm@openssh.com + - hmac-sha2-256 + + # List of the supported ciphersuites. If this section is not specified, + # only the default ciphersuites are enabled. + ciphersuites: + - tls-rsa-with-aes-128-gcm-sha256 + - tls-rsa-with-aes-256-gcm-sha384 + - tls-ecdhe-rsa-with-aes-128-gcm-sha256 + - tls-ecdhe-ecdsa-with-aes-128-gcm-sha256 + - tls-ecdhe-rsa-with-aes-256-gcm-sha384 + - tls-ecdhe-ecdsa-with-aes-256-gcm-sha384 + - tls-ecdhe-rsa-with-chacha20-poly1305 + - tls-ecdhe-ecdsa-with-chacha20-poly1305 + + +# This section configures the 'auth service': +auth_service: + # Turns 'auth' role on. Default is 'yes' + enabled: yes + + # A cluster name is used as part of a signature in certificates + # generated by this CA. + # + # We strongly recommend to explicitly set it to something meaningful as it + # becomes important when configuring trust between multiple clusters. + # + # By default an automatically generated name is used (not recommended) + # Every Teleport cluster must have a name. If a name is not supplied via + # `teleport.yaml` configuration file, a GUID will be generated. + # **IMPORTANT:** renaming a cluster invalidates its keys and all + # certificates it had created. + # + # IMPORTANT: if you change cluster_name, it will invalidate all generated + # certificates and keys (may need to wipe out /var/lib/teleport directory) + cluster_name: "main" + + authentication: + # default authentication type. 
possible values are 'local', 'oidc' and 'saml' + # only local authentication (Teleport's own user DB) is supported in the open + # source version + type: local + # second_factor can be off, otp, or u2f + second_factor: otp + # this section is used if second_factor is set to 'u2f' + u2f: + # app_id must point to the URL of the Teleport Web UI (proxy) accessible + # by the end users + app_id: https://localhost:3080 + # facets must list all proxy servers if there are more than one deployed + facets: + - https://localhost:3080 + + # IP and the port to bind to. Other Teleport nodes will be connecting to + # this port (AKA "Auth API" or "Cluster API") to validate client + # certificates + listen_addr: 0.0.0.0:3025 + + # The optional DNS name the auth server if located behind a load balancer. + # (see public_addr section below) + public_addr: auth.example.com:3025 + + # Pre-defined tokens for adding new nodes to a cluster. Each token specifies + # the role a new node will be allowed to assume. The more secure way to + # add nodes is to use `ttl node add --ttl` command to generate auto-expiring + # tokens. + # + # We recommend to use tools like `pwgen` to generate sufficiently random + # tokens of 32+ byte length. + tokens: + - "proxy,node:xxxxx" + - "auth:yyyy" + + # Optional setting for configuring session recording. Possible values are: + # "node" : sessions will be recorded on the node level (the default) + # "proxy" : recording on the proxy level, see "recording proxy mode" section. + # "off" : session recording is turned off + session_recording: "node" + + # This setting determines if a Teleport proxy performs strict host key checks. + # Only applicable if session_recording=proxy, see "recording proxy mode" for details. + proxy_checks_host_keys: yes + + # Determines if SSH sessions to cluster nodes are forcefully terminated + # after no activity from a client (idle client). + # Examples: "30m", "1h" or "1h30m" + client_idle_timeout: never + + # Determines if the clients will be forcefully disconnected when their + # certificates expire in the middle of an active SSH session. (default is 'no') + disconnect_expired_cert: no + + # Determines the interval at which Teleport will send keep-alive messages. The + # default value mirrors sshd at 15 minutes. keep_alive_count_max is the number + # of missed keep-alive messages before the server tears down the connection to the + # client. + keep_alive_interval: 15 + keep_alive_count_max: 3 + + # License file to start auth server with. Note that this setting is ignored + # in open-source Teleport and is required only for Teleport Pro, Business + # and Enterprise subscription plans. + # + # The path can be either absolute or relative to the configured `data_dir` + # and should point to the license file obtained from Teleport Download Portal. + # + # If not set, by default Teleport will look for the `license.pem` file in + # the configured `data_dir`. + license_file: /var/lib/teleport/license.pem + + # DEPRECATED in Teleport 3.2 (moved to proxy_service section) + kubeconfig_file: /path/to/kubeconfig + +# This section configures the 'node service': +ssh_service: + # Turns 'ssh' role on. Default is 'yes' + enabled: yes + + # IP and the port for SSH service to bind to. + listen_addr: 0.0.0.0:3022 + + # The optional public address the SSH service. 
This is useful if administrators + # want to allow users to connect to nodes directly, bypassing a Teleport proxy + # (see public_addr section below) + public_addr: node.example.com:3022 + + # See explanation of labels in "Labeling Nodes" section below + labels: + role: master + type: postgres + + # List of the commands to periodically execute. Their output will be used as node labels. + # See "Labeling Nodes" section below for more information and more examples. + commands: + # this command will add a label 'arch=x86_64' to a node + - name: arch + command: ['/bin/uname', '-p'] + period: 1h0m0s + + # enables reading ~/.tsh/environment before creating a session. by default + # set to false, can be set true here or as a command line flag. + permit_user_env: false + + # configures PAM integration. see below for more details. + pam: + enabled: no + service_name: teleport + +# This section configures the 'proxy service' +proxy_service: + # Turns 'proxy' role on. Default is 'yes' + enabled: yes + + # SSH forwarding/proxy address. Command line (CLI) clients always begin their + # SSH sessions by connecting to this port + listen_addr: 0.0.0.0:3023 + + # Reverse tunnel listening address. An auth server (CA) can establish an + # outbound (from behind the firewall) connection to this address. + # This will allow users of the outside CA to connect to behind-the-firewall + # nodes. + tunnel_listen_addr: 0.0.0.0:3024 + + # The HTTPS listen address to serve the Web UI and also to authenticate the + # command line (CLI) users via password+HOTP + web_listen_addr: 0.0.0.0:3080 + + # The DNS name the proxy HTTPS endpoint as accessible by cluster users. + # Defaults to the proxy's hostname if not specified. If running multiple + # proxies behind a load balancer, this name must point to the load balancer + # (see public_addr section below) + public_addr: proxy.example.com:3080 + + # The DNS name of the proxy SSH endpoint as accessible by cluster clients. + # Defaults to the proxy's hostname if not specified. If running multiple proxies + # behind a load balancer, this name must point to the load balancer. + # Use a TCP load balancer because this port uses SSH protocol. + ssh_public_addr: proxy.example.com:3023 + + # TLS certificate for the HTTPS connection. Configuring these properly is + # critical for Teleport security. + https_key_file: /var/lib/teleport/webproxy_key.pem + https_cert_file: /var/lib/teleport/webproxy_cert.pem + + # This section configures the Kubernetes proxy service + kubernetes: + # Turns 'kubernetes' proxy on. Default is 'no' + enabled: yes + + # Kubernetes proxy listen address. + listen_addr: 0.0.0.0:3026 + + # The DNS name of the Kubernetes proxy server that is accessible by cluster clients. + # If running multiple proxies behind a load balancer, this name must point to the + # load balancer. + public_addr: ['kube.example.com:3026'] + + # This setting is not required if the Teleport proxy service is + # deployed inside a Kubernetes cluster. Otherwise, Teleport proxy + # will use the credentials from this file: + kubeconfig_file: /path/to/kube/config +``` + +#### Public Addr + +Notice that all three Teleport services (proxy, auth, node) have an optional +`public_addr` property. The public address can take an IP or a DNS name. 
+It can also be a list of values: + +```yaml +public_addr: ["proxy-one.example.com", "proxy-two.example.com"] +``` + +Specifying a public address for a Teleport service may be useful in the following use cases: + +* You have multiple identical services, like proxies, behind a load balancer. +* You want Teleport to issue SSH certificate for the service with the + additional principals, e.g. host names. + +## Authentication + +Teleport uses the concept of "authentication connectors" to authenticate users when +they execute `tsh login` command. There are three types of authentication connectors: + +### Local Connector + +Local authentication is used to authenticate against a local Teleport user database. This database +is managed by `tctl users` command. Teleport also supports second factor authentication +(2FA) for the local connector. There are three possible values (types) of 2FA: + + * `otp` is the default. It implements [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) + standard. You can use [Google Authenticator](https://en.wikipedia.org/wiki/Google_Authenticator) or + [Authy](https://www.authy.com/) or any other TOTP client. + * `u2f` implements [U2F](https://en.wikipedia.org/wiki/Universal_2nd_Factor) standard for utilizing hardware (USB) + keys for second factor. + * `off` turns off second factor authentication. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: local + second_factor: off +``` + +### Github OAuth 2.0 Connector + +This connector implements Github OAuth 2.0 authentication flow. Please refer +to Github documentation on [Creating an OAuth App](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/) +to learn how to create and register an OAuth app. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: github +``` + +See [Github OAuth 2.0](#github-oauth-20) for details on how to configure it. + +### SAML + +This connector type implements SAML authentication. It can be configured +against any external identity manager like Okta or Auth0. This feature is +only available for Teleport Enterprise. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: saml +``` + +### OIDC + +Teleport implements OpenID Connect (OIDC) authentication, which is similar to +SAML in principle. This feature is only available for Teleport Enterprise. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: oidc +``` + + +### FIDO U2F + +Teleport supports [FIDO U2F](https://www.yubico.com/about/background/fido/) +hardware keys as a second authentication factor. By default U2F is disabled. To start using U2F: + +* Enable U2F in Teleport configuration `/etc/teleport.yaml`. +* For CLI-based logins you have to install [u2f-host](https://developers.yubico.com/libu2f-host/) utility. +* For web-based logins you have to use Google Chrome, as it is the only browser supporting U2F at this time. 
+ +```yaml +# snippet from /etc/teleport.yaml to show an example configuration of U2F: +auth_service: + authentication: + type: local + second_factor: u2f + # this section is needed only if second_factor is set to 'u2f' + u2f: + # app_id must point to the URL of the Teleport Web UI (proxy) accessible + # by the end users + app_id: https://localhost:3080 + # facets must list all proxy servers if there are more than one deployed + facets: + - https://localhost:3080 +``` + +For single-proxy setups, the `app_id` setting can be equal to the domain name of the +proxy, but this will prevent you from adding more proxies without changing the +`app_id`. For multi-proxy setups, the `app_id` should be an HTTPS URL pointing to +a JSON file that mirrors `facets` in the auth config. + +!!! warning "Warning": + The `app_id` must never change in the lifetime of the cluster. If the App ID + changes, all existing U2F key registrations will become invalid and all users + who use U2F as the second factor will need to re-register. + When adding a new proxy server, make sure to add it to the list of "facets" + in the configuration file, but also to the JSON file referenced by `app_id` + + +**Logging in with U2F** + +For logging in via the CLI, you must first install [u2f-host](https://developers.yubico.com/libu2f-host/). +Installing: + +```yaml +# OSX: +$ brew install libu2f-host + +# Ubuntu 16.04 LTS: +$ apt-get install u2f-host +``` + +Then invoke `tsh ssh` as usual to authenticate: + +``` +tsh --proxy ssh +``` + +!!! tip "Version Warning": + External user identities are only supported in [Teleport Enterprise](/enterprise/). Please reach + out to `sales@gravitational.com` for more information. From 5729dba2ff9282fd6538d70427bcadd1489287f6 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:23:30 +0300 Subject: [PATCH 07/23] add node join, node label guides, format architecture guides --- docs/4.1.yaml | 3 + docs/4.1/architecture/auth.md | 172 ++++++++++++++--------- docs/4.1/architecture/proxy.md | 35 +++++ docs/4.1/guides/node-join.md | 244 +++++++++++++++++++++++++++++++++ docs/4.1/guides/node-labels.md | 98 +++++++++++++ 5 files changed, 486 insertions(+), 66 deletions(-) create mode 100644 docs/4.1/guides/node-join.md create mode 100644 docs/4.1/guides/node-labels.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index bf37eff40b283..5767778965b16 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -16,6 +16,7 @@ markdown_extensions: - admonition - def_list - footnotes + - codehilite - toc: marker: '[TOC]' extra_css: [] @@ -36,6 +37,8 @@ pages: - Guides: # TODO: Add How-To Guide on Managing Nodes, Users, Trusted Clusters # etc. any common task should have a guide + - Add a Node to a Cluster: guides/node-join.md + - Label Nodes: guides/node-labels.md - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/architecture/auth.md b/docs/4.1/architecture/auth.md index a9cd68ef737cf..797976dfbcd06 100644 --- a/docs/4.1/architecture/auth.md +++ b/docs/4.1/architecture/auth.md @@ -1,22 +1,41 @@ # Teleport Auth -This is doc about the Teleport Authentication Service and Certificate Management. It explains how Users and Nodes are identified and granted access to Nodes and Services. +This is doc about the Teleport Authentication Service and Certificate +Management. It explains how Users and Nodes are identified and granted access to +Nodes and Services. [TOC] ## Authentication vs. Authorization -Teleport Auth handles both authentication and authorization. 
These topics are related but different and they are often discussed jointly as "Auth". - -**Authentication** is proving an identity. "I say I am Bob, and I really am Bob. See look I have Bob's purple hat.". The job of an Authentication system is to define the criteria by which users must prove their identity. Is having a purple hat enough to show that a person is Bob? Maybe, maybe not. To identify users and nodes to Teleport Auth we require them to present a cryptographically-signed certificate issued by the Teleport Auth Certificate Authority. - -**Authorization** is proving access to something: "Bob has a purple hat, but also a debit card and the correct PIN code. Bob can access a bank account with the number 814000001344. Can Bob get $20 out of the ATM?". The ATM's Authentication system would validate Bob's PIN Code, while the Authorization system would use a stored mapping from Bob to Account 814000001344 to decide whether Bob could withdraw cash. Authorization defines and determines permissions that users have within a system, such as access to cash within a banking system or data in a filesystem. Before users are granted access to nodes, the Auth Service checks their identity against a stored mapping in a database. +Teleport Auth handles both authentication and authorization. These topics are +related but different and they are often discussed jointly as "Auth". + +**Authentication** is proving an identity. "I say I am Bob, and I really am Bob. +See look I have Bob's purple hat.". The job of an Authentication system is to +define the criteria by which users must prove their identity. Is having a purple +hat enough to show that a person is Bob? Maybe, maybe not. To identify users and +nodes to Teleport Auth we require them to present a cryptographically-signed +certificate issued by the Teleport Auth Certificate Authority. + +**Authorization** is proving access to something: "Bob has a purple hat, but +also a debit card and the correct PIN code. Bob can access a bank account with +the number 814000001344. Can Bob get $20 out of the ATM?". The ATM's +Authentication system would validate Bob's PIN Code, while the Authorization +system would use a stored mapping from Bob to Account 814000001344 to decide +whether Bob could withdraw cash. Authorization defines and determines +permissions that users have within a system, such as access to cash within a +banking system or data in a filesystem. Before users are granted access to +nodes, the Auth Service checks their identity against a stored mapping in a +database. ![Authentication and Authorization](../img/authn_authz.svg) ## SSH Certificates -One can think of an SSH certificate as a "permit" issued and time-stamped by a trusted authority. In this case the authority is the Auth Server's Certificate Authority. A certificate contains four important pieces of data: +One can think of an SSH certificate as a "permit" issued and time-stamped by a +trusted authority. In this case the authority is the Auth Server's Certificate +Authority. A certificate contains four important pieces of data: 1. List of principals (identities) this certificate belongs to. 2. Signature of the certificate authority who issued it. @@ -27,29 +46,49 @@ One can think of an SSH certificate as a "permit" issued and time-stamped by a t Teleport uses SSH certificates to authenticate nodes and users within a cluster. -There are two CAs operating inside the Auth Server because nodes and users each need their own certificates. 
+There are two CAs operating inside the Auth Server because nodes and users each +need their own certificates. -* The **Node CA** issues certificates which identify a node (i.e. host, server, computer). These certificates are used to add new nodes to a cluster and identify connections coming from the node. -* The **User CA** issues certificates which identify a User. These certificates are used to authenticate users when they try to connect to a cluster node. +* The **Node CA** issues certificates which identify a node (i.e. host, server, + computer). These certificates are used to add new nodes to a cluster and + identify connections coming from the node. +* The **User CA** issues certificates which identify a User. These certificates + are used to authenticate users when they try to connect to a cluster node. ### Issuing Node Certificates -Node Certificates identify a node within a cluster and establish the permissions of the node to access to other Teleport services. The presence of a signed certificate on a node makes it a cluster member. +Node Certificates identify a node within a cluster and establish the permissions +of the node to access to other Teleport services. The presence of a signed +certificate on a node makes it a cluster member. ![Node Joins Cluster](../img/node_join.svg) -1. To join a cluster for the first time, a node must present a "join token" to the auth server. The token can be static (configured via config file) or a dynamic, single-use token generated by [`tctl nodes add`](../cli-docs/#tctl-nodes-add). +1. To join a cluster for the first time, a node must present a "join token" to + the auth server. The token can be static (configured via config file) or a + dynamic, single-use token generated by [`tctl nodes + add`](../cli-docs/#tctl-nodes-add). !!! tip "Token TTL": - When using dynamic tokens, their default time to live (TTL) is 15 minutes, but it can be reduced (not increased) via [`tctl nodes add --ttl`](../cli-docs/#tctl-nodes-add) flag. + When using dynamic tokens, their default time to live (TTL) is 15 + minutes, but it can be reduced (not increased) via + [`tctl nodes add --ttl`](../cli-docs/#tctl-nodes-add) flag. -2. When a new node joins the cluster, the auth server generates a new public/private keypair for the node and signs its certificate. This node certificate contains the node's role(s) (`proxy`, `auth` or `node`) as a certificate extension (opaque signed string). +2. When a new node joins the cluster, the auth server generates a new + public/private keypair for the node and signs its certificate. This node + certificate contains the node's role(s) (`proxy`, `auth` or `node`) as a + certificate extension (opaque signed string). ### Using Node Certificates ![Node Authorization](../img/node_cluster_auth.svg) -All nodes in a cluster can connect to the [Auth Server's API](#auth-api-server) implemented as an HTTP REST service running over the SSH tunnel. This API connection is authenticated with the node certificate and the encoded role is checked to enforce access control. For example, a client connection using a certificate with only the `node` role won't be able to add and delete users. This client connection would only be authorized to get auth servers registered in the cluster. +All nodes in a cluster can connect to the [Auth Server's API](#auth-api-server) + implemented as an HTTP REST service running over the SSH +tunnel. This API connection is authenticated with the node certificate and the +encoded role is checked to enforce access control. 
For example, a client +connection using a certificate with only the `node` role won't be able to add +and delete users. This client connection would only be authorized to get auth +servers registered in the cluster. ### Issuing User Certificates @@ -59,46 +98,77 @@ The Auth Server uses its User CA to issue user certificates. User certificates are stored on a user's machine in the `~/.tsh/` directory or also by the system's SSH agent if it is running. -1. To get permission to join a cluster for the first time a user must provide their username, password, and 2nd-factor token. Users can log in with [`tsh login`](../cli-docs/#tsh-login) or via the Web UI. The Auth Server check these against its identity storage and checks the 2nd factor token. +1. To get permission to join a cluster for the first time a user must provide + their username, password, and 2nd-factor token. Users can log in with [`tsh + login`](../cli-docs/#tsh-login) or via the Web UI. The Auth Server check + these against its identity storage and checks the 2nd factor token. -2. If the correct credentials were offered, the Auth Server will generate a signed certificate and return it to the client. For users certificates are stored in `~/.tsh` by default. If the client uses the [Web UI](./proxy/#web-ui-to-ssh) the signed certificate is associated with a secure websocket session. +2. If the correct credentials were offered, the Auth Server will generate a + signed certificate and return it to the client. For users certificates are + stored in `~/.tsh` by default. If the client uses the [Web + UI](./proxy/#web-ui-to-ssh) the signed certificate is associated with a + secure websocket session. -In addition to user's identity, user certificates also contain user roles and SSH options, like "permit-agent-forwarding" . +In addition to user's identity, user certificates also contain user roles and +SSH options, like "permit-agent-forwarding" . -This additional data is stored as a certificate extension and is protected by the CA signature. +This additional data is stored as a certificate extension and is protected by +the CA signature. ### Using User Certificates ![Client offers valid certificate](../img/user_auth.svg) -When a client requests to access a node cluster, the Auth Server first checks that a certificate exists and hasn't expired. If it has expired, the client must re-authenticate with their username, password, and 2nd factor. If the certificate is still valid, the Auth Server validates the certificate's signature. +When a client requests to access a node cluster, the Auth Server first checks +that a certificate exists and hasn't expired. If it has expired, the client must +re-authenticate with their username, password, and 2nd factor. If the +certificate is still valid, the Auth Server validates the certificate's +signature. -If it is correct the client is granted access to the cluster. From here, the [Proxy Server](./proxy/#connecting-to-a-node) establishes a connection between client and node. +If it is correct the client is granted access to the cluster. From here, the +[Proxy Server](./proxy/#connecting-to-a-node) establishes a connection between +client and node. ## Certificate Rotation -By default, all user certificates have an expiration date, also known as time to live (TTL). This TTL can be configured by a Teleport administrator. But the node certificates issued by an Auth Server are valid indefinitely by default. +By default, all user certificates have an expiration date, also known as time to +live (TTL). 
This TTL can be configured by a Teleport administrator. But the node +certificates issued by an Auth Server are valid indefinitely by default. -Teleport supports certificate rotation, i.e. the process of invalidating all previously-issued certificates for nodes _and_ users regardless of their TTL. Certificate rotation is triggered by [`tctl auth rotate`](../cli-docs/#tctl-auth). When this command is invoked by a Teleport administrator on one of cluster's Auth Servers, the following happens: +Teleport supports certificate rotation, i.e. the process of invalidating all +previously-issued certificates for nodes _and_ users regardless of their TTL. +Certificate rotation is triggered by [`tctl auth +rotate`](../cli-docs/#tctl-auth). When this command is invoked by a Teleport +administrator on one of cluster's Auth Servers, the following happens: 1. A new certificate authority (CA) key is generated. -2. The old CA will be considered valid _alongside_ the new CA for some period of time. This period of time is called a _grace period_ -3. During the grace period, all previously issued certificates will be considered valid, assuming their TTL isn't expired. -4. After the grace period is over, the certificates issued by the old CA are no longer accepted. +2. The old CA will be considered valid _alongside_ the new CA for some period of + time. This period of time is called a _grace period_ +3. During the grace period, all previously issued certificates will be + considered valid, assuming their TTL isn't expired. +4. After the grace period is over, the certificates issued by the old CA are no + longer accepted. This process is repeated twice, one for the node CA and once for the user CA. -Take a look at the [Certificate Guide](../admin-guide/#certificate-rotation) to learn how to do certificate rotation in practice. +Take a look at the [Certificate Guide](../admin-guide/#certificate-rotation) to +learn how to do certificate rotation in practice. ## Auth API -Clients can also connect to the auth API through the Teleport proxy to use a limited subset of the API to discover the member nodes of the cluster. +Clients can also connect to the auth API through the Teleport proxy to use a +limited subset of the API to discover the member nodes of the cluster. ## Auth State -The Auth service maintains state using a database of users, credentials, certificates, and audit logs. The default storage location is `/var/lib/teleport` or an [admin-configured storage destination](../admin-guide/#high-availability). +The Auth service maintains state using a database of users, credentials, +certificates, and audit logs. The default storage location is +`/var/lib/teleport` or an [admin-configured storage +destination](../admin-guide/#high-availability). There are three types of data stored by the auth server: @@ -121,8 +191,13 @@ There are three types of data stored by the auth server: ## Audit Log The Teleport auth server keeps the audit log of SSH-related events that take -place on any node with a Teleport cluster. It is important to understand that -the SSH nodes emit audit events and submit them to the auth server. +place on any node with a Teleport cluster. Each node in a cluster emits audit +events and submit them to the auth server. The events recorded include: + +* successful user logins +* node IP addresses +* session time +* session IDs !!! warning "Compatibility Warning": Because all SSH events like `exec` or `session_start` are reported by the @@ -143,41 +218,6 @@ storage. 
`/var/lib/teleport/log` to allow them to combine all audit events into the same audit log. [Learn how to deploy Teleport in HA Mode.](../admin-guide#high-availability)) -## Recording Proxy Mode - -In this mode, the proxy terminates (decrypts) the SSH connection using the -certificate supplied by the client via SSH agent forwarding and then establishes -its own SSH connection to the final destination server, effectively becoming an -authorized "man in the middle". This allows the proxy server to forward SSH -session data to the auth server to be recorded, as shown below: - -![recording-proxy](../img/recording-proxy.svg?style=grv-image-center-lg) - -The recording proxy mode, although _less secure_, was added to allow Teleport -users to enable session recording for OpenSSH's servers running `sshd`, which is -helpful when gradually transitioning large server fleets to Teleport. - -We consider the "recording proxy mode" to be less secure for two reasons: - -1. It grants additional privileges to the Teleport proxy. In the default mode, - the proxy stores no secrets and cannot "see" the decrypted data. This makes a - proxy less critical to the security of the overall cluster. But if an - attacker gains physical access to a proxy node running in the "recording" - mode, they will be able to see the decrypted traffic and client keys stored - in proxy's process memory. -2. Recording proxy mode requires the SSH agent forwarding. Agent forwarding is - required because without it, a proxy will not be able to establish the 2nd - connection to the destination node. - -However, there are advantages of proxy-based session recording too. When -sessions are recorded at the nodes, a root user can add iptables rules to -prevent sessions logs from reaching the Auth Server. With sessions recorded at -the proxy, users with root privileges on nodes have no way of disabling the -audit. - -See the [admin guide](../admin-guide#recorded-sessions) to learn how to turn on the -recording proxy mode. - ## Storage Back-Ends Different types of cluster data can be configured with different storage diff --git a/docs/4.1/architecture/proxy.md b/docs/4.1/architecture/proxy.md index 986a8adc9ee19..b90772e7f966d 100644 --- a/docs/4.1/architecture/proxy.md +++ b/docs/4.1/architecture/proxy.md @@ -90,6 +90,41 @@ client `ssh` or using `tsh`: [SSH jump hosts](https://wiki.gentoo.org/wiki/SSH_jump_host) implemented using OpenSSH's `ProxyCommand`. also supports OpenSSH's ProxyJump/ssh -J implementation as of Teleport 4.1. +## Recording Proxy Mode + +In this mode, the proxy terminates (decrypts) the SSH connection using the +certificate supplied by the client via SSH agent forwarding and then establishes +its own SSH connection to the final destination server, effectively becoming an +authorized "man in the middle". This allows the proxy server to forward SSH +session data to the auth server to be recorded, as shown below: + +![recording-proxy](../img/recording-proxy.svg?style=grv-image-center-lg) + +The recording proxy mode, although _less secure_, was added to allow Teleport +users to enable session recording for OpenSSH's servers running `sshd`, which is +helpful when gradually transitioning large server fleets to Teleport. + +We consider the "recording proxy mode" to be less secure for two reasons: + +1. It grants additional privileges to the Teleport proxy. In the default mode, + the proxy stores no secrets and cannot "see" the decrypted data. This makes a + proxy less critical to the security of the overall cluster. 
But if an + attacker gains physical access to a proxy node running in the "recording" + mode, they will be able to see the decrypted traffic and client keys stored + in proxy's process memory. +2. Recording proxy mode requires the SSH agent forwarding. Agent forwarding is + required because without it, a proxy will not be able to establish the 2nd + connection to the destination node. + +However, there are advantages of proxy-based session recording too. When +sessions are recorded at the nodes, a root user can add iptables rules to +prevent sessions logs from reaching the Auth Server. With sessions recorded at +the proxy, users with root privileges on nodes have no way of disabling the +audit. + +See the [admin guide](../admin-guide#recorded-sessions) to learn how to turn on +the recording proxy mode. + ## More Concepts * [Architecture Overview](./architecture) diff --git a/docs/4.1/guides/node-join.md b/docs/4.1/guides/node-join.md new file mode 100644 index 0000000000000..3675f2f2e0692 --- /dev/null +++ b/docs/4.1/guides/node-join.md @@ -0,0 +1,244 @@ +## Adding Nodes to the Cluster + +This guide will show you a few different ways to generate and use join tokens. +Join tokens are used by nodes to prove that they are trusted by an admin and +should be allowed to join a cluster. Once a node has joined a cluster it can +see the IP addresses and labels of other nodes along with Teleport User data. + +[TOC] + +## Recommended Prerequisites + +* Read through the [Architecture Overview](../architecture/overview). +* Read through the [Production Guide](./production) if you are setting up +Teleport in production. +* Run _all_ nodes as [Systemd Units](./production/#systemd-unit-file) unless you +are just working in a sandbox. + +## Step 1: Generate or Set a Join Token + +There are two ways to invite nodes to join a cluster: + +* **Dynamic Tokens**: Most secure +* **Static Tokens**: Less secure + +### Option 1 (Recommended): Generate a dynamic token + +You can generate or set a short-lived token with the +[`tctl`](../cli-docs/#tctl) admin tool. + +We recommend this method rather than static tokens because dynamic tokens +automatically expire, preventing potentially malicious actors from adding nodes +to your cluster. + +The [`tctl nodes add`](../cli-docs/#tctl-nodes-add) command also shows the +current CA Pin, which validates the current private key of the Auth Server +before allowing a node to join the cluster. Read more about CA Pinning in the +[Production Guide](./production/#ca-pinning). + +```bsh +# Set a specific token value that must be used within 5 minutes +$ tctl nodes add --ttl=5m --roles=node --token=secret-token-value +The invite token: secret-token-value +This token will expire in 5 minutes + +Run this on the new node to join the cluster: + +> teleport start \ + --roles=node \ + --token=secret-token-value \ + --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d \ + --auth-server=10.164.0.7:3025 + +Please note: + + - This invitation token will expire in 5 minutes + - 10.164.0.7:3025 must be reachable from the new node +``` + +If `--token` is not provided `tctl` will generate one. 
+```bsh +# generate a dynamic invitation token for a new node: +$ tctl nodes add --ttl=5m --roles=node +The invite token: e94d68a8a1e5821dbd79d03a960644f0 +This token will expire in 5 minutes + +Run this on the new node to join the cluster: + +> teleport start \ + --roles=node \ + --token=e94d68a8a1e5821dbd79d03a960644f0 \ + --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d \ + --auth-server=10.164.0.7:3025 + +Please note: + + - This invitation token will expire in 5 minutes + - 10.164.0.7:3025 must be reachable from the new node +``` + +The command prints out a `teleport start` command which you can run on a node +that you want to add to a cluster. We only recommend this option for sandbox or +staging environments as you are getting started with teleport. + +!!! warning "Resiliency Warning" + If the process fails unexpectedly or the node restarts the node service will not + restart automatically. See how to [add a node with a config + file](#option-1-recommended-run-the-node-service-with-a-config-file) for a more + resilient method. + +### Option 2: Set a static token in config file + +You can set a static token in the `auth_service` section of your configuration +file. The list of `tokens` represent the role(s) of a cluster node and tokens +that they can use. We encourage the use of [dynamic tokens](#option-1-recommended-generate-a-dynamic-token) for security, +but using static token may be the best option for some teams. + +The tokens set in the config will not expire unless they are removed from the +config and the `teleport` daemon is restarted. Anyone with the token and access +to the auth server's network can add nodes to the cluster. + +If you are adding a node using a static token we recommend that you add the +[`ca_pin`](./production/#ca-pinning) key to the `teleport` section on the node +to be added. Here's an example of how this works. + +```bash +# get the CA Pin on the Auth server +$ tctl status +Cluster grav-00 +User CA never updated +Host CA never updated +CA pin sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d +``` + +Edit the yaml config on the Auth Server. + +```diff +# Add static toke "secret-token-value" +# which will allow nodes with role `node` to join +auth_service: + enabled: true + tokens: + # This static token allows new hosts to join the cluster as + # "node" role with the token `secret-token-value`. ++ - "node:secret-token-value" +``` + +You will need to restart teleport for the static token configuration to take +effect. In production we recommend using a [Systemd Unit File](./production) to +manage the `teleport` daemon so we show `systemctl` commands below. If you +are not using `systemctl` currently just `Ctrl-C` to or `kill ` to +kill the current teleport process. + +```bash +$ systemctl reload teleport +# Tail the teleport service logs to confirm that it worked +$ journalctl -fu teleport +``` + + + +## Step 2: Use a Node Join Token + +In the previous step [`tctl nodes add`](../cli-docs/#tctl-nodes-add) printed out +a `teleport start` command which you can run on the node you want to add. You +can use this command for testing, but be cautious! If the process fails +unexpectedly or the node restarts the node service will not restart +automatically. Fix this by running [Teleport as a System +Unit](./production/#systemd-unit-file). 
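+
+If you already manage the node with systemd as recommended, the sketch below
+shows one way to make sure the service starts on boot and to watch its logs
+while it joins. It assumes a `teleport` unit file is already installed as
+described in the Production Guide.
+
+```bash
+# Assumes a teleport unit file is already installed (see the Production Guide)
+$ sudo systemctl enable teleport
+$ sudo systemctl start teleport
+
+# Confirm the service is running and watch the logs for join errors
+$ sudo systemctl status teleport
+$ journalctl -fu teleport
+```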
+ +### Option 1 (Recommended): Run the node service with a config file + +Check that the join token you are using is recognized by the Auth Service +```bash +# run this on an auth node +# here we have a static which will never expire +# and a dynamic token which will expire +$ tctl tokens ls +Token Type Expiry Time (UTC) +-------------------------------- ---- ------------------- +fuzzywuzzywasabear Node never +b150b9349b4ca40bcb4df298a2f50152 Node 17 Oct 19 11:18 UTC +``` + +Add the token and [CA Pin](../production/#ca-pinning) to your config file + +```diff +# Node Service Config +# You may have other config +# this example shows a minimal config +# for running only the `node` service +teleport: ++ auth_token: "fuzzywuzzywasabear" ++ ca_pin: "sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d" + auth_servers: + - 10.164.0.7:3025 +ssh_service: + enabled: "yes" +auth_service: + enabled: "no" +proxy_service: + enabled: "no" +``` + +Save these files and restart/start both services + +```bash +# on auth server +$ systemctl reload teleport +``` + +```bash +# on node which is joining +$ systemctl start teleport +``` + +Use `systemctl status teleport` or `journalctl -u teleport` to check the logs +and make sure there are no errors. + +!!! warning "Certificate Warnings" + If you previously joined the cluster using an old token or certificate + you may see an error `x509: certificate signed by unknown authority`. + This is due to a mismatch between the Auth Server state and the information presented by the node. To resolve it you can remove the node certificate + with `rm -r /var/lib/teleport` on the node and/or + `tctl rm nodes/` on the auth node to make Teleport Auth "forget" + the node and start fresh. + +### Option 2: Start the node service via the CLI + +This option can be used when to quickly add a node to a cluster with minimal +configuration. We only recommend this option for sandbox or staging environments +as you are getting started with teleport. Get the CA Pin of the auth node by +running `tctl status`. + +```bash +# adding a new regular SSH node to the cluster: +$ teleport start --roles=node --token=b150b9349b4ca40bcb4df298a2f50152 +> --auth-server=10.164.0.7:3025 +> --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d +``` + +## Next Steps + +As new nodes come online, they start sending ping requests every few seconds +to the Auth service. You can see the nodes that have successfully joined +by running [`tctl nodes ls`](../cli-docs/#tctl-nodes-ls) + +```bsh +$ tctl nodes ls + +Node Name Node ID Address Labels +--------- ------- ------- ------ +turing d52527f9-b260-41d0-bb5a-e23b0cfe0f8f 10.1.0.5:3022 distro:ubuntu +dijkstra c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro:debian +``` + +!!! tip "Join Tokens are only used for the initial connection" + It is important to understand that join tokens are only used to establish + the connection for the first time. The clusters will exchange certificates + and won't be using the token to re-establish the connection in the future. + Future connections will rely upon the [node certificate](../architecture/auth/#authentication-in-teleport) to identify and authorize a node. 
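+
+Once a node has joined, you may also want to revoke any unused dynamic join
+tokens rather than waiting for them to expire. Here is a brief sketch, run on
+the auth server (the token value below is illustrative):
+
+```bash
+# List outstanding join tokens
+$ tctl tokens ls
+
+# Revoke a dynamic token that is no longer needed
+$ tctl tokens del b150b9349b4ca40bcb4df298a2f50152
+```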
+ +**More Guides** +* Add Labels to Nodes +* Revoke Tokens diff --git a/docs/4.1/guides/node-labels.md b/docs/4.1/guides/node-labels.md new file mode 100644 index 0000000000000..7f88ea6531e60 --- /dev/null +++ b/docs/4.1/guides/node-labels.md @@ -0,0 +1,98 @@ +## Label Nodes + +[TOC] + +Teleport allows for the application of arbitrary key-value pairs to each node, +called labels. There are two kinds of labels: + +1. `static labels` do not change over time, while `teleport` process is running. + Examples of static labels are physical location of nodes, name of the + environment (staging vs production), etc. + +2. `dynamic labels` also known as "label commands" allow to generate labels at + runtime. Teleport will execute an external command on a node at a + configurable frequency and the output of a command becomes the label value. + Examples include reporting load averages, presence of a process, time after + last reboot, etc. + +There are two ways to configure node labels. + +1. Via command line, by using `--labels` flag to `teleport start` command. +2. Using `/etc/teleport.yaml` configuration file on the nodes. + +## Example 1: Add labels on the command line + +To define labels as command line arguments, use `--labels` flag like shown below +when you start the `teleport --roles=node` service. This method works well for +static labels or simple commands. + +In this example the node will have the static label `env=sanbox`. Teleport will +run `uptime -p` every minute and assign the result to the label key `uptime`. It +will also run `uname -r` and assign it to the label key `kernel` + +```bash +# first kill previous instances of teleport +# with Ctrl-C or kill +$ teleport start --roles=node +> --labels env=sandbox,uptime=[1m:"uptime -p"],kernel=[1h:"uname -r"] +> --token=secret-token-value \ +> --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d \ +> --auth-server=10.164.0.7:3025 +``` + +!!! warning "Resiliency Warning" + If the process fails unexpectedly or the node restarts the node service will not + restart automatically. Run the `teleport` binary as a [Systemd Unit](./production/#systemd-unit-file) to avoid this. + +## Example 2: Add labels to the config file + +Alternatively, you can update `labels` via a configuration file: + +```yaml +ssh_service: + enabled: "yes" + # Static labels are simple key/value pairs: + labels: + environment: test +``` + +To configure dynamic labels via a configuration file, define a `commands` array +as shown below. The `name` key is the label key: `arch` in the example here. + +```yaml +ssh_service: + enabled: "yes" + # Dynamic labels are listed under "commands": + commands: + - name: arch + command: ['/usr/bin/uname', '-r', '-s'] + # this setting tells teleport to execute the command uname + # once an hour. `period` cannot be less than one minute. + period: 1h0m0s +``` + +`/path/to/executable` must be a valid executable command with the (i.e. +executable bit must be set). If you run the `teleport` daemon as `root` this +should not be an issue, but if `teleport` runs as a non-root user in your system +check the permissions of the executable with `ls -l `. +Modify file permissions from an authorized OS user with the `chmod +x` command. +If the executable is a shell script it must have a proper [shebang +line](https://en.wikipedia.org/wiki/Shebang_(Unix)). 
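+
+For example, here is a minimal sketch of a label script and its permissions;
+the path and file name below are illustrative:
+
+```bash
+# A small label script with a proper shebang line; teleport records the
+# script's standard output as the label value
+$ cat /usr/local/bin/kernel-label.sh
+#!/bin/sh
+uname -r
+
+# Make sure the OS user running the teleport daemon can execute the script
+$ chmod +x /usr/local/bin/kernel-label.sh
+```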
+ +**Syntax Tip:** notice that `command` setting is an array where the first element +is a valid executable and each subsequent element is an argument, i.e: + +```yaml +# valid syntax: +command: ["/bin/uname", "-m"] + +# INVALID syntax: +command: ["/bin/uname -m"] + +# if you want to pipe several bash commands together, here's how to do it: +# notice how ' and " are interchangeable and you can use it for quoting: +command: ["/bin/sh", "-c", "uname -a | egrep -o '[0-9]+\.[0-9]+\.[0-9]+'"] +``` +**More Guides** +* Add Nodes to a Cluster +* Revoke Tokens From 2812570b4fc708318153046d0a52cf8aaa678fec Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:24:44 +0300 Subject: [PATCH 08/23] add wip token management guide --- docs/4.1.yaml | 1 + docs/4.1/guides/token-management.md | 41 +++++++++++++++++++++++++++++ 2 files changed, 42 insertions(+) create mode 100644 docs/4.1/guides/token-management.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 5767778965b16..9e9937baeac9a 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -39,6 +39,7 @@ pages: # etc. any common task should have a guide - Add a Node to a Cluster: guides/node-join.md - Label Nodes: guides/node-labels.md + - Manage Tokens: guides/token-management.md - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/guides/token-management.md b/docs/4.1/guides/token-management.md new file mode 100644 index 0000000000000..3d4502ed7ddd3 --- /dev/null +++ b/docs/4.1/guides/token-management.md @@ -0,0 +1,41 @@ +## Manage Tokens + +TODO: WIP + +This guide will show you how to add, remove, and view active tokens. + +There are several kinds of tokens that you will see as an admin: + +* Node Join Tokens +* User Join Tokens +* Trusted Cluster Tokens + +## Revoking Invitations + +As you have seen above, Teleport uses tokens to invite users to a cluster (sign-up tokens) or +to add new nodes to it (provisioning tokens). + +Both types of tokens can be revoked before they can be used. To see a list of outstanding tokens, +run this command: + +```bsh +$ tctl tokens ls + +Token Role Expiry Time (UTC) +----- ---- ----------------- +eoKoh0caiw6weoGupahgh6Wuo7jaTee2 Proxy never +696c0471453e75882ff70a761c1a8bfa Node 17 May 16 03:51 UTC +6fc5545ab78c2ea978caabef9dbd08a5 Signup 17 May 16 04:24 UTC +``` + +In this example, the first token has a "never" expiry date because it is a static token configured via a config file. + +The 2nd token with "Node" role was generated to invite a new node to this cluster. And the +3rd token was generated to invite a new user. 
+ +The latter two tokens can be deleted (revoked) via `tctl tokens del` command: + +```yaml +$ tctl tokens del 696c0471453e75882ff70a761c1a8bfa +Token 696c0471453e75882ff70a761c1a8bfa has been deleted +``` \ No newline at end of file From fe89098473248fe14af215fed7f289941fd12025 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:33:26 +0300 Subject: [PATCH 09/23] add wip trusted cluster guide, rm guide content from admin-guide --- docs/4.1.yaml | 1 + docs/4.1/admin-guide.md | 281 +--------------------------- docs/4.1/guides/trusted-clusters.md | 56 ++++++ 3 files changed, 59 insertions(+), 279 deletions(-) create mode 100644 docs/4.1/guides/trusted-clusters.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 9e9937baeac9a..640334a091be0 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -40,6 +40,7 @@ pages: - Add a Node to a Cluster: guides/node-join.md - Label Nodes: guides/node-labels.md - Manage Tokens: guides/token-management.md + - Add a Trusted Cluster: guides/trusted-clusters.md - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index 6a24c7a1b5d36..7f97ed20eee50 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -2,6 +2,8 @@ ## Adding and Deleting Users + + This section covers internal user identities, i.e.user accounts created and stored in Teleport's internal storage. Most production users of Teleport use _external_ users via [Github](#github-oauth-20) or [Okta](ssh_okta) or any other @@ -95,285 +97,6 @@ Some fields in the user record are reserved for internal use. Some of them will be finalized and documented in the future versions. Fields like `is_locked` or `traits/logins` can be used starting in version 2.3 -## Adding Nodes to the Cluster - -Teleport is a "clustered" system, meaning it only allows access to nodes -(servers) that had been previously granted cluster membership. - -A cluster membership means that a node receives its own host certificate signed -by the cluster's auth server. To receive a host certificate upon joining a -cluster, a new Teleport host must present an "invite token". An invite token -also defines which role a new host can assume within a cluster: `auth` , `proxy` -or `node` . - -There are two ways to create invitation tokens: - -* **Static Tokens** are easy to use and somewhat less secure. -* **Dynamic Tokens** are more secure but require more planning. - -### Static Tokens - -Static tokens are defined ahead of time by an administrator and stored in the -auth server's config file: - -``` yaml -# Config section in `/etc/teleport.yaml` file for the auth server -auth_service: - enabled: true - tokens: - # This static token allows new hosts to join the cluster as "proxy" or "node" - - - "proxy,node:secret-token-value" - - # A token can also be stored in a file. In this example the token for adding - # new auth servers is stored in /path/to/tokenfile - - - "auth:/path/to/tokenfile" - -``` - -### Short-lived Tokens - -A more secure way to add nodes to a cluster is to generate tokens as they are -needed. Such token can be used multiple times until its time to live (TTL) -expires. - -Use the [ `tctl` ](../cli-docs/#tctl) tool to register a new invitation token (or -it can also generate a new token for you). 
In the following example a new token -is created with a TTL of 5 minutes: - -``` bsh -$ tctl nodes add --ttl=5m --roles=node,proxy --token=secret-value -The invite token: secret-value -``` - -If `--token` is not provided, [ `tctl` ](../cli-docs/#tctl) will generate one: - -``` bsh -# generate a short-lived invitation token for a new node: -$ tctl nodes add --ttl=5m --roles=node,proxy -The invite token: e94d68a8a1e5821dbd79d03a960644f0 - -# you can also list all generated non-expired tokens: -$ tctl tokens ls -Token Type Expiry Time ---------------- ----------- --------------- -e94d68a8a1e5821dbd79d03a960644f0 Node 25 Sep 18 00:21 UTC - -# ... or revoke an invitation before it's used: -$ tctl tokens rm e94d68a8a1e5821dbd79d03a960644f0 -``` - -### Using Node Invitation Tokens - -Both static and short-lived tokens are used the same way. Execute the following -command on a new node to add it to a cluster: - -``` bsh -# adding a new regular SSH node to the cluster: -$ teleport start --roles=node --token=secret-token-value --auth-server=10.0.10.5 - -# adding a new regular SSH node using Teleport Node Tunneling: -$ teleport start --roles=node --token=secret-token-value --auth-server=teleport-proxy.example.com:3080 - -# adding a new proxy service on the cluster: -$ teleport start --roles=proxy --token=secret-token-value --auth-server=10.0.10.5 -``` - -As new nodes come online, they start sending ping requests every few seconds to -the CA of the cluster. This allows users to explore cluster membership and size: - -``` bsh -$ tctl nodes ls - -Node Name Node ID Address Labels ---------- ------- ------- ------ -turing d52527f9-b260-41d0-bb5a-e23b0cfe0f8f 10.1.0.5:3022 distro:ubuntu -dijkstra c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro:debian -``` - -### Untrusted Auth Servers - -Teleport nodes use the HTTPS protocol to offer the join tokens to the auth -server running on `10.0.10.5` in the example above. In a zero-trust environment, -you must assume that an attacker can highjack the IP address of the auth server -e.g. `10.0.10.5` . - -To prevent this from happening, you need to supply every new node with an -additional bit of information about the auth server. This technique is called -"CA Pinning". It works by asking the auth server to produce a "CA Pin", which is -a hashed value of it's private key, i.e.it cannot be forged by an attacker. - -On the auth server: - -``` bash -$ tctl status -Cluster staging.example.com -User CA never updated -Host CA never updated -CA pin sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 -``` - -The "CA pin" at the bottom needs to be passed to the new nodes when they're -starting for the first time, i.e.when they join a cluster: - -Via CLI: - -``` bash -$ teleport start \ - --roles=node \ - --token=1ac590d36493acdaa2387bc1c492db1a \ - --ca-pin=sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 \ - --auth-server=10.12.0.6:3025 -``` - -or via `/etc/teleport.yaml` on a node: - -``` yaml -teleport: - auth_token: "1ac590d36493acdaa2387bc1c492db1a" - ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" - auth_servers: - - - "10.12.0.6:3025" - -``` - -!!! warning "Warning": - If a CA pin not provided, Teleport node will join a - cluster but it will print a `WARN` message (warning) into it's standard - error output. - -!!! 
warning "Warning": - The CA pin becomes invalid if a Teleport administrator - performs the CA rotation by executing - [ `tctl auth rotate` ](../cli-docs/#tctl-auth-rotate) . - -## Revoking Invitations - -As you have seen above, Teleport uses tokens to invite users to a cluster -(sign-up tokens) or to add new nodes to it (provisioning tokens). - -Both types of tokens can be revoked before they can be used. To see a list of -outstanding tokens, run this command: - -``` bsh -$ tctl tokens ls - -Token Role Expiry Time (UTC) ------ ---- ----------------- -eoKoh0caiw6weoGupahgh6Wuo7jaTee2 Proxy never -696c0471453e75882ff70a761c1a8bfa Node 17 May 16 03:51 UTC -6fc5545ab78c2ea978caabef9dbd08a5 Signup 17 May 16 04:24 UTC -``` - -In this example, the first token has a "never" expiry date because it is a -static token configured via a config file. - -The 2nd token with "Node" role was generated to invite a new node to this -cluster. And the 3rd token was generated to invite a new user. - -The latter two tokens can be deleted (revoked) via [`tctl tokens -del`](../cli-docs/#tctl-tokens-rm) command: - -``` yaml -$ tctl tokens del 696c0471453e75882ff70a761c1a8bfa -Token 696c0471453e75882ff70a761c1a8bfa has been deleted -``` - -## Labeling Nodes - -In addition to specifying a custom nodename, Teleport also allows for the -application of arbitrary key:value pairs to each node, called labels. There are -two kinds of labels: - -1. `static labels` do not change over time, while - [ `teleport` ](../cli-docs/#teleport) process is running. - Examples of static labels are physical location of nodes, name of the - environment (staging vs production), etc. - -2. `dynamic labels` also known as "label commands" allow to generate labels at - runtime. Teleport will execute an external command on a node at a - configurable frequency and the output of a command becomes the label value. - Examples include reporting load averages, presence of a process, time after - last reboot, etc. - -There are two ways to configure node labels. - -1. Via command line, by using `--labels` flag to `teleport start` command. -2. Using `/etc/teleport.yaml` configuration file on the nodes. - -To define labels as command line arguments, use `--labels` flag like shown -below. This method works well for static labels or simple commands: - -``` yaml -$ teleport start --labels uptime=[1m:"uptime -p"],kernel=[1h:"uname -r"] -``` - -Alternatively, you can update `labels` via a configuration file: - -``` yaml -ssh_service: - enabled: "yes" - # Static labels are simple key/value pairs: - labels: - environment: test -``` - -To configure dynamic labels via a configuration file, define a `commands` array -as shown below: - -``` yaml -ssh_service: - enabled: "yes" - # Dynamic labels AKA "commands": - commands: - - + name: arch - - command: ['/path/to/executable', 'flag1', 'flag2'] - # this setting tells teleport to execute the command above - # once an hour. this value cannot be less than one minute. - period: 1h0m0s -``` - -`/path/to/executable` must be a valid executable command (i.e.executable bit -must be set) which also includes shell scripts with a proper [shebang -line](https://en.wikipedia.org/wiki/Shebang_(Unix)). 
- -**Important:** notice that `command` setting is an array where the first element -is a valid executable and each subsequent element is an argument, i.e: - -``` yaml -# valid syntax: -command: ["/bin/uname", "-m"] - -# INVALID syntax: -command: ["/bin/uname -m"] - -# if you want to pipe several bash commands together, here's how to do it: -# notice how ' and " are interchangeable and you can use it for quoting: -command: ["/bin/sh", "-c", "uname -a | egrep -o '[0-9]+\.[0-9]+\.[0-9]+'"] -``` - -## Audit Log - -Teleport logs every SSH event into its audit log. There are two components of -the audit log: - -1. **SSH Events:** Teleport logs events like successful user logins along with - - the metadata like remote IP address, time and the session ID. - -2. **Recorded Sessions:** Every SSH shell session is recorded and can be - - replayed later. The recording is done by the nodes themselves, by default, - but can be configured to be done by the proxy. - -Refer to the ["Audit Log" chapter in the Teleport -Architecture](architecture#audit-log) to learn more about how the audit Log and -session recording are designed. - ### SSH Events Teleport supports multiple storage back-ends for storing the SSH events. The diff --git a/docs/4.1/guides/trusted-clusters.md b/docs/4.1/guides/trusted-clusters.md new file mode 100644 index 0000000000000..0a90000e0eae0 --- /dev/null +++ b/docs/4.1/guides/trusted-clusters.md @@ -0,0 +1,56 @@ +### Untrusted Auth Servers + +Teleport nodes use the HTTPS protocol to offer the join tokens to the auth +server running on `10.0.10.5` in the example above. In a zero-trust environment, +you must assume that an attacker can highjack the IP address of the auth server +e.g. `10.0.10.5` . + +To prevent this from happening, you need to supply every new node with an +additional bit of information about the auth server. This technique is called +"CA Pinning". It works by asking the auth server to produce a "CA Pin", which is +a hashed value of it's private key, i.e.it cannot be forged by an attacker. + +On the auth server: + +``` bash +$ tctl status +Cluster staging.example.com +User CA never updated +Host CA never updated +CA pin sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 +``` + +The "CA pin" at the bottom needs to be passed to the new nodes when they're +starting for the first time, i.e.when they join a cluster: + +Via CLI: + +``` bash +$ teleport start \ + --roles=node \ + --token=1ac590d36493acdaa2387bc1c492db1a \ + --ca-pin=sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 \ + --auth-server=10.12.0.6:3025 +``` + +or via `/etc/teleport.yaml` on a node: + +``` yaml +teleport: + auth_token: "1ac590d36493acdaa2387bc1c492db1a" + ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" + auth_servers: + + - "10.12.0.6:3025" + +``` + +!!! warning "Warning": + If a CA pin not provided, Teleport node will join a + cluster but it will print a `WARN` message (warning) into it's standard + error output. + +!!! warning "Warning": + The CA pin becomes invalid if a Teleport administrator + performs the CA rotation by executing + [ `tctl auth rotate` ](../cli-docs/#tctl-auth-rotate) . 
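Because rotation invalidates the pin, you will need to fetch the new value from
the auth server before adding any further nodes. A minimal sketch, assuming
`tctl` is run locally on the auth server and that the pin is the third
whitespace-separated field of the status line shown above:

``` bash
# Re-read the cluster status after a rotation
$ tctl status

# Capture only the pin value, e.g. for use in provisioning scripts
$ tctl status | awk '/CA pin/ {print $3}'
```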
\ No newline at end of file From ddbe7efa59b030ef42fb7a4390192d9dff50fd0c Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:44:06 +0300 Subject: [PATCH 10/23] add todos to production --- docs/4.1/production.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/4.1/production.md b/docs/4.1/production.md index b4ba0fe8dadbd..4965a0a0f3f69 100644 --- a/docs/4.1/production.md +++ b/docs/4.1/production.md @@ -1,8 +1,8 @@ # Production Guide -Minimal Config example +TODO: This is WIP document -Include security considerations. Address vulns in quay? + [TOC] @@ -146,6 +146,10 @@ use them to add nodes. ## Security Considerations + + + + ### CA Pinning Teleport nodes use the HTTPS protocol to offer the join tokens to the auth From a70357d8875574c4c01d9750d6d214179079d0ba Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:50:26 +0300 Subject: [PATCH 11/23] format auth guide --- docs/4.1/architecture/auth.md | 137 ++++++++++++++++++++++++++-------- 1 file changed, 106 insertions(+), 31 deletions(-) diff --git a/docs/4.1/architecture/auth.md b/docs/4.1/architecture/auth.md index a9cd68ef737cf..7cd18acadd86b 100644 --- a/docs/4.1/architecture/auth.md +++ b/docs/4.1/architecture/auth.md @@ -1,22 +1,41 @@ # Teleport Auth -This is doc about the Teleport Authentication Service and Certificate Management. It explains how Users and Nodes are identified and granted access to Nodes and Services. +This is doc about the Teleport Authentication Service and Certificate +Management. It explains how Users and Nodes are identified and granted access to +Nodes and Services. [TOC] ## Authentication vs. Authorization -Teleport Auth handles both authentication and authorization. These topics are related but different and they are often discussed jointly as "Auth". - -**Authentication** is proving an identity. "I say I am Bob, and I really am Bob. See look I have Bob's purple hat.". The job of an Authentication system is to define the criteria by which users must prove their identity. Is having a purple hat enough to show that a person is Bob? Maybe, maybe not. To identify users and nodes to Teleport Auth we require them to present a cryptographically-signed certificate issued by the Teleport Auth Certificate Authority. - -**Authorization** is proving access to something: "Bob has a purple hat, but also a debit card and the correct PIN code. Bob can access a bank account with the number 814000001344. Can Bob get $20 out of the ATM?". The ATM's Authentication system would validate Bob's PIN Code, while the Authorization system would use a stored mapping from Bob to Account 814000001344 to decide whether Bob could withdraw cash. Authorization defines and determines permissions that users have within a system, such as access to cash within a banking system or data in a filesystem. Before users are granted access to nodes, the Auth Service checks their identity against a stored mapping in a database. +Teleport Auth handles both authentication and authorization. These topics are +related but different and they are often discussed jointly as "Auth". + +**Authentication** is proving an identity. "I say I am Bob, and I really am Bob. +See look I have Bob's purple hat.". The job of an Authentication system is to +define the criteria by which users must prove their identity. Is having a purple +hat enough to show that a person is Bob? Maybe, maybe not. 
To identify users and +nodes to Teleport Auth we require them to present a cryptographically-signed +certificate issued by the Teleport Auth Certificate Authority. + +**Authorization** is proving access to something: "Bob has a purple hat, but +also a debit card and the correct PIN code. Bob can access a bank account with +the number 814000001344. Can Bob get $20 out of the ATM?". The ATM's +Authentication system would validate Bob's PIN Code, while the Authorization +system would use a stored mapping from Bob to Account 814000001344 to decide +whether Bob could withdraw cash. Authorization defines and determines +permissions that users have within a system, such as access to cash within a +banking system or data in a filesystem. Before users are granted access to +nodes, the Auth Service checks their identity against a stored mapping in a +database. ![Authentication and Authorization](../img/authn_authz.svg) ## SSH Certificates -One can think of an SSH certificate as a "permit" issued and time-stamped by a trusted authority. In this case the authority is the Auth Server's Certificate Authority. A certificate contains four important pieces of data: +One can think of an SSH certificate as a "permit" issued and time-stamped by a +trusted authority. In this case the authority is the Auth Server's Certificate +Authority. A certificate contains four important pieces of data: 1. List of principals (identities) this certificate belongs to. 2. Signature of the certificate authority who issued it. @@ -27,29 +46,49 @@ One can think of an SSH certificate as a "permit" issued and time-stamped by a t Teleport uses SSH certificates to authenticate nodes and users within a cluster. -There are two CAs operating inside the Auth Server because nodes and users each need their own certificates. +There are two CAs operating inside the Auth Server because nodes and users each +need their own certificates. -* The **Node CA** issues certificates which identify a node (i.e. host, server, computer). These certificates are used to add new nodes to a cluster and identify connections coming from the node. -* The **User CA** issues certificates which identify a User. These certificates are used to authenticate users when they try to connect to a cluster node. +* The **Node CA** issues certificates which identify a node (i.e. host, server, + computer). These certificates are used to add new nodes to a cluster and + identify connections coming from the node. +* The **User CA** issues certificates which identify a User. These certificates + are used to authenticate users when they try to connect to a cluster node. ### Issuing Node Certificates -Node Certificates identify a node within a cluster and establish the permissions of the node to access to other Teleport services. The presence of a signed certificate on a node makes it a cluster member. +Node Certificates identify a node within a cluster and establish the permissions +of the node to access to other Teleport services. The presence of a signed +certificate on a node makes it a cluster member. ![Node Joins Cluster](../img/node_join.svg) -1. To join a cluster for the first time, a node must present a "join token" to the auth server. The token can be static (configured via config file) or a dynamic, single-use token generated by [`tctl nodes add`](../cli-docs/#tctl-nodes-add). +1. To join a cluster for the first time, a node must present a "join token" to + the auth server. 
The token can be static (configured via config file) or a + dynamic, single-use token generated by [`tctl nodes + add`](../cli-docs/#tctl-nodes-add). !!! tip "Token TTL": - When using dynamic tokens, their default time to live (TTL) is 15 minutes, but it can be reduced (not increased) via [`tctl nodes add --ttl`](../cli-docs/#tctl-nodes-add) flag. + When using dynamic tokens, their default time to live (TTL) is 15 + minutes, but it can be reduced (not increased) via + [`tctl nodes add --ttl`](../cli-docs/#tctl-nodes-add) flag. -2. When a new node joins the cluster, the auth server generates a new public/private keypair for the node and signs its certificate. This node certificate contains the node's role(s) (`proxy`, `auth` or `node`) as a certificate extension (opaque signed string). +2. When a new node joins the cluster, the auth server generates a new + public/private keypair for the node and signs its certificate. This node + certificate contains the node's role(s) (`proxy`, `auth` or `node`) as a + certificate extension (opaque signed string). ### Using Node Certificates ![Node Authorization](../img/node_cluster_auth.svg) -All nodes in a cluster can connect to the [Auth Server's API](#auth-api-server) implemented as an HTTP REST service running over the SSH tunnel. This API connection is authenticated with the node certificate and the encoded role is checked to enforce access control. For example, a client connection using a certificate with only the `node` role won't be able to add and delete users. This client connection would only be authorized to get auth servers registered in the cluster. +All nodes in a cluster can connect to the [Auth Server's API](#auth-api-server) + implemented as an HTTP REST service running over the SSH +tunnel. This API connection is authenticated with the node certificate and the +encoded role is checked to enforce access control. For example, a client +connection using a certificate with only the `node` role won't be able to add +and delete users. This client connection would only be authorized to get auth +servers registered in the cluster. ### Issuing User Certificates @@ -59,46 +98,77 @@ The Auth Server uses its User CA to issue user certificates. User certificates are stored on a user's machine in the `~/.tsh/` directory or also by the system's SSH agent if it is running. -1. To get permission to join a cluster for the first time a user must provide their username, password, and 2nd-factor token. Users can log in with [`tsh login`](../cli-docs/#tsh-login) or via the Web UI. The Auth Server check these against its identity storage and checks the 2nd factor token. +1. To get permission to join a cluster for the first time a user must provide + their username, password, and 2nd-factor token. Users can log in with [`tsh + login`](../cli-docs/#tsh-login) or via the Web UI. The Auth Server check + these against its identity storage and checks the 2nd factor token. -2. If the correct credentials were offered, the Auth Server will generate a signed certificate and return it to the client. For users certificates are stored in `~/.tsh` by default. If the client uses the [Web UI](./proxy/#web-ui-to-ssh) the signed certificate is associated with a secure websocket session. +2. If the correct credentials were offered, the Auth Server will generate a + signed certificate and return it to the client. For users certificates are + stored in `~/.tsh` by default. 
If the client uses the [Web + UI](./proxy/#web-ui-to-ssh) the signed certificate is associated with a + secure websocket session. -In addition to user's identity, user certificates also contain user roles and SSH options, like "permit-agent-forwarding" . +In addition to user's identity, user certificates also contain user roles and +SSH options, like "permit-agent-forwarding" . -This additional data is stored as a certificate extension and is protected by the CA signature. +This additional data is stored as a certificate extension and is protected by +the CA signature. ### Using User Certificates ![Client offers valid certificate](../img/user_auth.svg) -When a client requests to access a node cluster, the Auth Server first checks that a certificate exists and hasn't expired. If it has expired, the client must re-authenticate with their username, password, and 2nd factor. If the certificate is still valid, the Auth Server validates the certificate's signature. +When a client requests to access a node cluster, the Auth Server first checks +that a certificate exists and hasn't expired. If it has expired, the client must +re-authenticate with their username, password, and 2nd factor. If the +certificate is still valid, the Auth Server validates the certificate's +signature. -If it is correct the client is granted access to the cluster. From here, the [Proxy Server](./proxy/#connecting-to-a-node) establishes a connection between client and node. +If it is correct the client is granted access to the cluster. From here, the +[Proxy Server](./proxy/#connecting-to-a-node) establishes a connection between +client and node. ## Certificate Rotation -By default, all user certificates have an expiration date, also known as time to live (TTL). This TTL can be configured by a Teleport administrator. But the node certificates issued by an Auth Server are valid indefinitely by default. +By default, all user certificates have an expiration date, also known as time to +live (TTL). This TTL can be configured by a Teleport administrator. But the node +certificates issued by an Auth Server are valid indefinitely by default. -Teleport supports certificate rotation, i.e. the process of invalidating all previously-issued certificates for nodes _and_ users regardless of their TTL. Certificate rotation is triggered by [`tctl auth rotate`](../cli-docs/#tctl-auth). When this command is invoked by a Teleport administrator on one of cluster's Auth Servers, the following happens: +Teleport supports certificate rotation, i.e. the process of invalidating all +previously-issued certificates for nodes _and_ users regardless of their TTL. +Certificate rotation is triggered by [`tctl auth +rotate`](../cli-docs/#tctl-auth). When this command is invoked by a Teleport +administrator on one of cluster's Auth Servers, the following happens: 1. A new certificate authority (CA) key is generated. -2. The old CA will be considered valid _alongside_ the new CA for some period of time. This period of time is called a _grace period_ -3. During the grace period, all previously issued certificates will be considered valid, assuming their TTL isn't expired. -4. After the grace period is over, the certificates issued by the old CA are no longer accepted. +2. The old CA will be considered valid _alongside_ the new CA for some period of + time. This period of time is called a _grace period_ +3. During the grace period, all previously issued certificates will be + considered valid, assuming their TTL isn't expired. +4. 
After the grace period is over, the certificates issued by the old CA are no + longer accepted. This process is repeated twice, one for the node CA and once for the user CA. -Take a look at the [Certificate Guide](../admin-guide/#certificate-rotation) to learn how to do certificate rotation in practice. +Take a look at the [Certificate Guide](../admin-guide/#certificate-rotation) to +learn how to do certificate rotation in practice. ## Auth API -Clients can also connect to the auth API through the Teleport proxy to use a limited subset of the API to discover the member nodes of the cluster. +Clients can also connect to the auth API through the Teleport proxy to use a +limited subset of the API to discover the member nodes of the cluster. ## Auth State -The Auth service maintains state using a database of users, credentials, certificates, and audit logs. The default storage location is `/var/lib/teleport` or an [admin-configured storage destination](../admin-guide/#high-availability). +The Auth service maintains state using a database of users, credentials, +certificates, and audit logs. The default storage location is +`/var/lib/teleport` or an [admin-configured storage +destination](../admin-guide/#high-availability). There are three types of data stored by the auth server: @@ -121,8 +191,13 @@ There are three types of data stored by the auth server: ## Audit Log The Teleport auth server keeps the audit log of SSH-related events that take -place on any node with a Teleport cluster. It is important to understand that -the SSH nodes emit audit events and submit them to the auth server. +place on any node with a Teleport cluster. Each node in a cluster emits audit +events and submit them to the auth server. The events recorded include: + +* successful user logins +* node IP addresses +* session time +* session IDs !!! 
warning "Compatibility Warning": Because all SSH events like `exec` or `session_start` are reported by the From c2cc36d948e424c959c677208299366abe97eab9 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Mon, 14 Oct 2019 14:35:41 +0300 Subject: [PATCH 12/23] wip: production firewall rules --- docs/4.1.yaml | 20 +++++- docs/4.1/guides/production.md | 120 ++++++++++++++++++++++++++++++++++ 2 files changed, 138 insertions(+), 2 deletions(-) create mode 100644 docs/4.1/guides/production.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 1d5a1af6b1577..2b5f58d6ee265 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -30,8 +30,24 @@ pages: - Admin Manual: admin-guide.md - FAQ: faq.md - Guides: - - AWS: aws_oss_guide.md - - Installation: installation.md + - Quickstart: guides/quickstart.md + - Installation: guides/installation.md + - Production: guides/production.md + - Share Sessions: guides/session-sharing.md + - Replay Sessions: guides/audit-replay.md + - Manage User Permissions: guides/user-permissions.md + - Label Nodes: guides/node-labels.md + - Teleport with OpenSSH: guides/openssh.md + - Trusted Clusters: trustedclusters.md + - AWS: aws_oss.md + - Concepts: + - Teleport Basic Concepts: concepts/basics.md + - Teleport Users: concepts/users.md + - Teleport Nodes: concepts/nodes.md + - Teleport Auth: concepts/auth.md + - Teleport Proxy: concepts/proxy.md + - Architecture: concepts/architecture.md + - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md - OneLogin: ssh_one_login.md diff --git a/docs/4.1/guides/production.md b/docs/4.1/guides/production.md new file mode 100644 index 0000000000000..5875ded0a891e --- /dev/null +++ b/docs/4.1/guides/production.md @@ -0,0 +1,120 @@ +# Production Guide + +TODO Build off Quickstart, but include many more details and multi-node set up. + +Minimal Config example + +Include security considerations. Address vulns in quay? + +[TOC] + +## Prerequisites + +* Read about [Teleport Basics](../concepts/basics) +* Read through the [Installation Guide](../guides/installation) to see the available packages and binaries available. +* Read the CLI Docs for [`teleport`](../cli-docs/#teleport) + +## Designing Your Cluster + +Before installing anything there are a few things you should think about. + +* Where will you host Teleport + * On-premises + * Cloud VMs such as AWS EC2 or GCE + * An existing Kubernetes Cluster +* What does your existing network configuration look like? + * Are you able to administer the network firewall rules yourself or do you need to work with a network admin? + * Are these nodes accessible to the public Internet or behind NAT? +* Which users ([Roles or ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) on k8s) are set up on the existing system? + * Can you add new users or Roles yourself or do you need to work with a system admin? + +## Firewall Configuration + +Teleport services listen on several ports. This table shows the default port numbers. + +|Port | Service | Description | Ingress | Egress +|----------|------------|-------------|---------|---------- +| 3080 | Proxy | HTTPS port clients connect to. Used to authenticate `tsh` users and web users into the cluster. | Allow inbound connections from HTTP and SSH clients.| Allow outbound connections to HTTP and SSH clients. +| 3023 | Proxy | SSH port clients connect to after authentication. A proxy will forward this connection to port `3022` on the destination node. | Allow inbound traffic from SSH clients. 
| Allow outbound traffic to SSH clients. +| 3022 | Node | SSH port to the Node Service. This is Teleport's equivalent of port `22` for SSH. | Allow inbound traffic from proxy host. | Allow outbound traffic to the proxy host. +| 3025 | Auth | SSH port used by the Auth Service to serve its Auth API to other nodes in a cluster. | Allow inbound connections from all cluster nodes. | Allow outbound traffic to cluster nodes. +| 3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. | | + + + + +## Installation + +First + +## Running Teleport in Production + +### Systemd Unit File + +In production, we recommend starting teleport daemon via an init system like +`systemd`. If systemd and unit files are new to you check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's the recommended Teleport service unit file for systemd. + + +```yaml +[Unit] +Description=Teleport SSH Service +After=network.target + +[Service] +Type=simple +Restart=on-failure +# Set the nodes roles with the `--roles` +# In most production environments you will not +# want to run all three roles on a single host +# proxy,auth,node is the default value if none is set +ExecStart=/usr/local/bin/teleport start --roles=auth --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid +ExecReload=/bin/kill -HUP $MAINPID +PIDFile=/var/run/teleport.pid + +[Install] +WantedBy=multi-user.target +``` + +There are a couple of important things to notice about this file: + +1. The start command in the unit file specifies `--config` as a file and there are very few flags passed to the `teleport` binary. Most of the configuration for Teleport should be done in the [configuration file](../configuration). + +2. The **ExecReload** command allows admins to run `systemctl reload teleport`. This will attempt to perform a graceful restart of _*but it only works if network-based backend storage like [DynamoDB](../configuration/#storage) or [etc 3.3](../configuration/#storage) is configured*_. Graceful Restarts will fork a new process to handle new incoming requests and leave the old daemon process running until existing clients disconnect. + +You can also perform restarts/upgrades by sending `kill` signals +to a Teleport daemon manually. + +| Signal | Teleport Daemon Behavior +|-------------------------|--------------------------------------- +| `USR1` | Dumps diagnostics/debugging information into syslog. +| `TERM`, `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped. +| `USR2` | Forks a new Teleport daemon to serve new connections. +| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. + + + +This will copy Teleport binaries to `/usr/local/bin`. + +Let's start Teleport. First, create a directory for Teleport +to keep its data. By default it's `/var/lib/teleport`. Then start `teleport` daemon: + +```bash +$ sudo teleport start +``` + +!!! danger "WARNING": + Teleport stores data in `/var/lib/teleport`. Make sure that regular/non-admin users do not + have access to this folder on the Auth server. + + +If you are logged in as `root` you may want to create a new OS-level user first. On linux create a new user called `` with the following commands: +```bash +$ adduser +$ su +``` + +Security considerations on installing tctl under root or not + +!!! 
danger "WARNING": + Teleport stores data in `/var/lib/teleport`. Make sure that regular/non-admin users do not + have access to this folder on the Auth server.--> From 7defcc5494892e43b98e53de36eb0d3b1d21ee20 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Mon, 14 Oct 2019 14:45:38 +0300 Subject: [PATCH 13/23] reorg admin guide --- docs/4.1/admin-guide.md | 109 ++++------------------------------------ 1 file changed, 9 insertions(+), 100 deletions(-) diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index 800e6153de4e1..628f6c505d806 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -63,99 +63,13 @@ $ curl https://get.gravitational.com/teleport-v4.0.8-darwin-amd64-bin.tar.gz.sha 0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz ``` -## Definitions - -Before diving into configuring and running Teleport, it helps to take a look at -the [Teleport Architecture](/architecture) and review the key concepts this -document will be referring to: - -|Concept | Description -|----------|------------ -|Node | Synonym to "server" or "computer", something one can "SSH to". A node must be running the [ `teleport` ](../cli-docs/#teleport) daemon with "node" role/service turned on. -|Certificate Authority (CA) | A pair of public/private keys Teleport uses to manage access. A CA can sign a public key of a user or node, establishing their cluster membership. -|Teleport Cluster | A Teleport Auth Service contains two CAs. One is used to sign user keys and the other signs node keys. A collection of nodes connected to the same CA is called a "cluster". -|Cluster Name | Every Teleport cluster must have a name. If a name is not supplied via `teleport.yaml` configuration file, a GUID will be generated.**IMPORTANT:** renaming a cluster invalidates its keys and all certificates it had created. -|Trusted Cluster | Teleport Auth Service can allow 3rd party users or nodes to connect if their public keys are signed by a trusted CA. A "trusted cluster" is a pair of public keys of the trusted CA. It can be configured via `teleport.yaml` file. - -## Teleport Daemon - -The Teleport daemon is called [ `teleport` ](./cli-docs/#teleport) and it supports -the following commands: - -|Command | Description -|------------|------------------------------------------------------- -|start | Starts the Teleport daemon. -|configure | Dumps a sample configuration file in YAML format into standard output. -|version | Shows the Teleport version. -|status | Shows the status of a Teleport connection. This command is only available from inside of an active SSH session. -|help | Shows help. - -When experimenting, you can quickly start [ `teleport` ](../cli-docs/#teleport) -with verbose logging by typing [ `teleport start -d` ](./cli-docs/#teleport-start) -. +When experimenting, you can quickly start `teleport` with verbose logging by typing `teleport start -d`. !!! danger "WARNING" Teleport stores data in `/var/lib/teleport` . Make sure that regular/non-admin users do not have access to this folder on the Auth server. -### Systemd Unit File - -In production, we recommend starting teleport daemon via an init system like -`systemd` . 
Here's the recommended Teleport service unit file for systemd: - -``` yaml -[Unit] -Description=Teleport SSH Service -After=network.target - -[Service] -Type=simple -Restart=on-failure -ExecStart=/usr/local/bin/teleport start --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid -ExecReload=/bin/kill -HUP $MAINPID -PIDFile=/var/run/teleport.pid - -[Install] -WantedBy=multi-user.target -``` - -### Graceful Restarts - -If using the systemd service unit file above, executing `systemctl reload -teleport` will perform a graceful restart, i.e.the Teleport daemon will fork a -new process to handle new incoming requests, leaving the old daemon process -running until existing clients disconnect. - -!!! warning "Version warning": - Graceful restarts only work if Teleport is - deployed using network-based storage like DynamoDB or etcd 3.3+. Future - versions of Teleport will not have this limitation. - -You can also perform restarts/upgrades by sending `kill` signals to a Teleport -daemon manually. - -| Signal | Teleport Daemon Behavior -|-------------------------|--------------------------------------- -| `USR1` | Dumps diagnostics/debugging information into syslog. -| `TERM` , `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped. -| `USR2` | Forks a new Teleport daemon to serve new connections. -| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. - -### Ports - -Teleport services listen on several ports. This table shows the default port -numbers. - -|Port | Service | Description -|----------|------------|------------------------------------------- -|3022 | Node | SSH port. This is Teleport's equivalent of port `#22` for SSH. -|3023 | Proxy | SSH port clients connect to. A proxy will forward this connection to port `#3022` on the destination node. -|3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. -|3025 | Auth | SSH port used by the Auth Service to serve its API to other nodes in a cluster. -|3080 | Proxy | HTTPS connection to authenticate `tsh` users and web users into the cluster. The same connection is used to serve a Web UI. -|3026 | Kubernetes Proxy | HTTPS Kubernetes proxy (if enabled) - ### Filesystem Layout By default, a Teleport node has the following files present. The location of all @@ -837,9 +751,9 @@ dijkstra c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro ### Untrusted Auth Servers Teleport nodes use the HTTPS protocol to offer the join tokens to the auth -server running on `10.0.10.5` in the example above. In a zero-trust environment, -you must assume that an attacker can highjack the IP address of the auth server -e.g. `10.0.10.5` . +server running on `10.0.10.5` in the example above. In a zero-trust +environment, you must assume that an attacker can highjack the IP address of +the auth server e.g. `10.0.10.5`. To prevent this from happening, you need to supply every new node with an additional bit of information about the auth server. This technique is called @@ -930,9 +844,7 @@ application of arbitrary key:value pairs to each node, called labels. There are two kinds of labels: 1. `static labels` do not change over time, while - [ `teleport` ](../cli-docs/#teleport) process is running. - Examples of static labels are physical location of nodes, name of the environment (staging vs production), etc. 
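As a quick illustration of static labels (a sketch that reuses the `--labels`
flag covered earlier in this guide; the key/value pairs are placeholders):

``` bash
# Tag a node with its environment and location when it starts
$ teleport start --roles=node --labels environment=staging,location=us-east-1
```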
@@ -986,8 +898,8 @@ ssh_service: must be set) which also includes shell scripts with a proper [shebang line](https://en.wikipedia.org/wiki/Shebang_(Unix)). -**Important:** notice that `command` setting is an array where the first element -is a valid executable and each subsequent element is an argument, i.e: +**Important:** notice that `command` setting is an array where the first element is +a valid executable and each subsequent element is an argument, i.e: ``` yaml # valid syntax: @@ -1785,10 +1697,9 @@ $ cat cluster_node_keys @cert-authority *.graviton-auth ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLNduBoHQaqi+kgkq3gLYjc6JIyBBnCFLgm63b5rtmWl/CJD7T9HWHxZphaS1jra6CWdboLeTp6sDUIKZ/Qw1MKFlfoqZZ8k6to43bxx7DvAHs0Te4WpuS/YRmWFhb6mMVOa8Rd4/9jE+c0f9O/t7X4m5iR7Fp7Tt+R/pjJfr03Loi6TYP/61AgXD/BkVDf+IcU4+9nknl+kaVPSGcPS9/Vbni1208Q+VN7B7Umy71gCh02gfv3rBGRgjT/cRAivuVoH/z3n5UwWg+9R3GD/l+XZKgv+pfe3OHoyDFxYKs9JaX0+GWc504y3Grhos12Lb8sNmMngxxxQ/KUDOV9z+R type=host ``` -!!! tip "Note": - When sharing the @cert-authority make sure that the URL for the - proxy is correct. In the above example, `*.graviton-auth` should be changed to - teleport.example.com. +!!! tip "Note": When sharing the @cert-authority make sure that the URL for the + proxy is correct. In the above example, `*.graviton-auth` should be changed + to teleport.example.com. On your client machine, you need to import these keys. It will allow your OpenSSH client to verify that host's certificates are signed by the trusted CA @@ -2347,9 +2258,7 @@ clients, etc), the following rules apply: upgrade to 3.4 first. * Teleport clients ( [ `tsh` ](../cli-docs/#tsh) for users and - [ `tctl` ](../cli-docs/#tctl) for admins) may not be compatible - if older than the auth or the proxy server. They will print an error if there is an incompatibility. From 2433a04a390de572b136f9349d94fef5bb0550ab Mon Sep 17 00:00:00 2001 From: Heather Young Date: Tue, 15 Oct 2019 14:18:12 +0300 Subject: [PATCH 14/23] saving the file would help --- docs/4.1/architecture/nodes.md | 4 ++++ docs/4.1/installation.md | 2 +- 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/docs/4.1/architecture/nodes.md b/docs/4.1/architecture/nodes.md index 9820324364179..d44028c1d9d9c 100644 --- a/docs/4.1/architecture/nodes.md +++ b/docs/4.1/architecture/nodes.md @@ -121,4 +121,8 @@ file. * [Teleport Users](./users) * [Teleport Auth](./auth) * [Teleport Proxy](./proxy) +<<<<<<< HEAD:docs/4.1/architecture/nodes.md +======= +* [Architecture](./architecture) +>>>>>>> saving the file would help:docs/4.1/concepts/nodes.md diff --git a/docs/4.1/installation.md b/docs/4.1/installation.md index b422ca21da4b0..f1c66b3d53cdd 100644 --- a/docs/4.1/installation.md +++ b/docs/4.1/installation.md @@ -165,4 +165,4 @@ $ sudo chown $USER /var/lib/teleport If the build succeeds the binaries `teleport, tsh`, and `tctl` are now in the directory `$GOPATH/src/github.com/gravitational/teleport/build` - \ No newline at end of file + From 5f9d088a16ba76739486ba3b118952a1ec787021 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Wed, 16 Oct 2019 23:55:54 +0300 Subject: [PATCH 15/23] fix conflict in nodes guide --- docs/4.1/architecture/nodes.md | 5 ----- 1 file changed, 5 deletions(-) diff --git a/docs/4.1/architecture/nodes.md b/docs/4.1/architecture/nodes.md index d44028c1d9d9c..02d78035afdb0 100644 --- a/docs/4.1/architecture/nodes.md +++ b/docs/4.1/architecture/nodes.md @@ -121,8 +121,3 @@ file. 
* [Teleport Users](./users) * [Teleport Auth](./auth) * [Teleport Proxy](./proxy) -<<<<<<< HEAD:docs/4.1/architecture/nodes.md - -======= -* [Architecture](./architecture) ->>>>>>> saving the file would help:docs/4.1/concepts/nodes.md From 50ac764787a66404d0ebd85a3077ca3640591a1c Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 16:53:42 +0300 Subject: [PATCH 16/23] move content from admin to prod guide --- docs/4.1/admin-guide.md | 100 ++------------- docs/4.1/guides/production.md | 120 ------------------ docs/4.1/production.md | 221 ++++++++++++++++++++++++++++++++++ 3 files changed, 230 insertions(+), 211 deletions(-) delete mode 100644 docs/4.1/guides/production.md create mode 100644 docs/4.1/production.md diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index 628f6c505d806..e793d1bd499a8 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -1,87 +1,5 @@ # Teleport Admin Manual -This manual covers the installation and configuration of Teleport and the -ongoing management of a Teleport cluster. It assumes that the reader has good -understanding of Linux administration. - -## Installing - -To install, download the official binaries from the [Teleport -Downloads](https://gravitational.com/teleport/download/) section on our web site -and run: - -``` -$ tar -xzf teleport-binary-release.tar.gz -$ sudo make install -``` - -### Installing from Source - -Gravitational Teleport is written in Go language. It requires Golang v1.8.3 or -newer. - -``` bash -# get the source & build: -$ mkdir -p $GOPATH/src/github.com/gravitational -$ cd $GOPATH/src/github.com/gravitational -$ git clone https://github.com/gravitational/teleport.git -$ cd teleport -$ make full - -# create the default data directory before starting: -$ sudo mkdir -p /var/lib/teleport -``` - -### Teleport Checksum - -Gravitational Teleport provides a checksum from the Downloads page. This can be -used to verify the integrity of our binary. - -![Teleport Checksum](img/teleport-sha.png) - -**Checking Checksum on Mac OS** - -``` bash -$ shasum -a 256 teleport-v4.0.8-darwin-amd64-bin.tar.gz -0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz -``` - -**Checking Checksum on Linux** - -``` bash -$ sha256sum teleport-v4.0.8-darwin-amd64-bin.tar.gz -0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz -``` - -**Checking Checksum on Automated Systems** - -If you download Teleport via an automated system, you can programmatically -obtain the checksum by adding `.sha256` to the binary. - -``` bash -$ curl https://get.gravitational.com/teleport-v4.0.8-darwin-amd64-bin.tar.gz.sha256 -0826a17b440ac20d4c38ade3d0a5eb1c62a00c4d5eb88e60b5ea627d426aaed2 teleport-v4.0.8-darwin-amd64-bin.tar.gz -``` - -When experimenting, you can quickly start `teleport` with verbose logging by typing `teleport start -d`. - -!!! danger "WARNING" - Teleport stores data in `/var/lib/teleport` . Make sure that - regular/non-admin users do not have access to this folder on the Auth - server. - -### Filesystem Layout - -By default, a Teleport node has the following files present. The location of all -of them is configurable. 
- -| Full path | Purpose | -|---------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `/etc/teleport.yaml` | Teleport configuration file (optional).| -| `/usr/local/bin/teleport` | Teleport daemon binary.| -| `/usr/local/bin/tctl` | Teleport admin tool. It is only needed for auth servers.| -| `/var/lib/teleport` | Teleport data directory. Nodes keep their keys and certificates there. Auth servers store the audit log and the cluster keys there, but the audit log storage can be further configured via `auth_service` section in the config file.| - ## Configuration You should use a [configuration file](#configuration-file) to configure the @@ -751,9 +669,9 @@ dijkstra c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro ### Untrusted Auth Servers Teleport nodes use the HTTPS protocol to offer the join tokens to the auth -server running on `10.0.10.5` in the example above. In a zero-trust -environment, you must assume that an attacker can highjack the IP address of -the auth server e.g. `10.0.10.5`. +server running on `10.0.10.5` in the example above. In a zero-trust environment, +you must assume that an attacker can highjack the IP address of the auth server +e.g. `10.0.10.5` . To prevent this from happening, you need to supply every new node with an additional bit of information about the auth server. This technique is called @@ -849,7 +767,6 @@ two kinds of labels: environment (staging vs production), etc. 2. `dynamic labels` also known as "label commands" allow to generate labels at - runtime. Teleport will execute an external command on a node at a configurable frequency and the output of a command becomes the label value. Examples include reporting load averages, presence of a process, time after @@ -898,8 +815,8 @@ ssh_service: must be set) which also includes shell scripts with a proper [shebang line](https://en.wikipedia.org/wiki/Shebang_(Unix)). -**Important:** notice that `command` setting is an array where the first element is -a valid executable and each subsequent element is an argument, i.e: +**Important:** notice that `command` setting is an array where the first element +is a valid executable and each subsequent element is an argument, i.e: ``` yaml # valid syntax: @@ -1697,9 +1614,10 @@ $ cat cluster_node_keys @cert-authority *.graviton-auth ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLNduBoHQaqi+kgkq3gLYjc6JIyBBnCFLgm63b5rtmWl/CJD7T9HWHxZphaS1jra6CWdboLeTp6sDUIKZ/Qw1MKFlfoqZZ8k6to43bxx7DvAHs0Te4WpuS/YRmWFhb6mMVOa8Rd4/9jE+c0f9O/t7X4m5iR7Fp7Tt+R/pjJfr03Loi6TYP/61AgXD/BkVDf+IcU4+9nknl+kaVPSGcPS9/Vbni1208Q+VN7B7Umy71gCh02gfv3rBGRgjT/cRAivuVoH/z3n5UwWg+9R3GD/l+XZKgv+pfe3OHoyDFxYKs9JaX0+GWc504y3Grhos12Lb8sNmMngxxxQ/KUDOV9z+R type=host ``` -!!! tip "Note": When sharing the @cert-authority make sure that the URL for the - proxy is correct. In the above example, `*.graviton-auth` should be changed - to teleport.example.com. +!!! tip "Note": + When sharing the @cert-authority make sure that the URL for the + proxy is correct. In the above example, `*.graviton-auth` should be changed to + teleport.example.com. On your client machine, you need to import these keys. 
It will allow your OpenSSH client to verify that host's certificates are signed by the trusted CA diff --git a/docs/4.1/guides/production.md b/docs/4.1/guides/production.md deleted file mode 100644 index 5875ded0a891e..0000000000000 --- a/docs/4.1/guides/production.md +++ /dev/null @@ -1,120 +0,0 @@ -# Production Guide - -TODO Build off Quickstart, but include many more details and multi-node set up. - -Minimal Config example - -Include security considerations. Address vulns in quay? - -[TOC] - -## Prerequisites - -* Read about [Teleport Basics](../concepts/basics) -* Read through the [Installation Guide](../guides/installation) to see the available packages and binaries available. -* Read the CLI Docs for [`teleport`](../cli-docs/#teleport) - -## Designing Your Cluster - -Before installing anything there are a few things you should think about. - -* Where will you host Teleport - * On-premises - * Cloud VMs such as AWS EC2 or GCE - * An existing Kubernetes Cluster -* What does your existing network configuration look like? - * Are you able to administer the network firewall rules yourself or do you need to work with a network admin? - * Are these nodes accessible to the public Internet or behind NAT? -* Which users ([Roles or ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) on k8s) are set up on the existing system? - * Can you add new users or Roles yourself or do you need to work with a system admin? - -## Firewall Configuration - -Teleport services listen on several ports. This table shows the default port numbers. - -|Port | Service | Description | Ingress | Egress -|----------|------------|-------------|---------|---------- -| 3080 | Proxy | HTTPS port clients connect to. Used to authenticate `tsh` users and web users into the cluster. | Allow inbound connections from HTTP and SSH clients.| Allow outbound connections to HTTP and SSH clients. -| 3023 | Proxy | SSH port clients connect to after authentication. A proxy will forward this connection to port `3022` on the destination node. | Allow inbound traffic from SSH clients. | Allow outbound traffic to SSH clients. -| 3022 | Node | SSH port to the Node Service. This is Teleport's equivalent of port `22` for SSH. | Allow inbound traffic from proxy host. | Allow outbound traffic to the proxy host. -| 3025 | Auth | SSH port used by the Auth Service to serve its Auth API to other nodes in a cluster. | Allow inbound connections from all cluster nodes. | Allow outbound traffic to cluster nodes. -| 3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. | | - - - - -## Installation - -First - -## Running Teleport in Production - -### Systemd Unit File - -In production, we recommend starting teleport daemon via an init system like -`systemd`. If systemd and unit files are new to you check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's the recommended Teleport service unit file for systemd. 
- - -```yaml -[Unit] -Description=Teleport SSH Service -After=network.target - -[Service] -Type=simple -Restart=on-failure -# Set the nodes roles with the `--roles` -# In most production environments you will not -# want to run all three roles on a single host -# proxy,auth,node is the default value if none is set -ExecStart=/usr/local/bin/teleport start --roles=auth --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid -ExecReload=/bin/kill -HUP $MAINPID -PIDFile=/var/run/teleport.pid - -[Install] -WantedBy=multi-user.target -``` - -There are a couple of important things to notice about this file: - -1. The start command in the unit file specifies `--config` as a file and there are very few flags passed to the `teleport` binary. Most of the configuration for Teleport should be done in the [configuration file](../configuration). - -2. The **ExecReload** command allows admins to run `systemctl reload teleport`. This will attempt to perform a graceful restart of _*but it only works if network-based backend storage like [DynamoDB](../configuration/#storage) or [etc 3.3](../configuration/#storage) is configured*_. Graceful Restarts will fork a new process to handle new incoming requests and leave the old daemon process running until existing clients disconnect. - -You can also perform restarts/upgrades by sending `kill` signals -to a Teleport daemon manually. - -| Signal | Teleport Daemon Behavior -|-------------------------|--------------------------------------- -| `USR1` | Dumps diagnostics/debugging information into syslog. -| `TERM`, `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped. -| `USR2` | Forks a new Teleport daemon to serve new connections. -| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. - - - -This will copy Teleport binaries to `/usr/local/bin`. - -Let's start Teleport. First, create a directory for Teleport -to keep its data. By default it's `/var/lib/teleport`. Then start `teleport` daemon: - -```bash -$ sudo teleport start -``` - -!!! danger "WARNING": - Teleport stores data in `/var/lib/teleport`. Make sure that regular/non-admin users do not - have access to this folder on the Auth server. - - -If you are logged in as `root` you may want to create a new OS-level user first. On linux create a new user called `` with the following commands: -```bash -$ adduser -$ su -``` - -Security considerations on installing tctl under root or not - -!!! danger "WARNING": - Teleport stores data in `/var/lib/teleport`. Make sure that regular/non-admin users do not - have access to this folder on the Auth server.--> diff --git a/docs/4.1/production.md b/docs/4.1/production.md new file mode 100644 index 0000000000000..b4ba0fe8dadbd --- /dev/null +++ b/docs/4.1/production.md @@ -0,0 +1,221 @@ +# Production Guide + +Minimal Config example + +Include security considerations. Address vulns in quay? + +[TOC] + +## Prerequisites + +* Read the [Architecture Overview](../architecture/overview) +* Read through the [Installation Guide](../installation) to see the available packages and binaries available. +* Read the CLI Docs for [`teleport`](../cli-docs/#teleport) + +## Designing Your Cluster + +Before installing anything there are a few things you should think about. + +* Where will you host Teleport? 
+ * On-premises + * Cloud VMs such as AWS EC2 or GCE + * An existing Kubernetes Cluster +* What does your existing network configuration look like? + * Are you able to administer the network firewall rules yourself or do you need to work with a network admin? + * Are these nodes accessible to the public Internet or behind NAT? +* Which users ([Roles or ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) on k8s) are set up on the existing system? + * Can you add new users or Roles yourself or do you need to work with a system admin? + +## Firewall Configuration + +Teleport services listen on several ports. This table shows the default port numbers. + +|Port | Service | Description | Ingress | Egress +|----------|------------|-------------|---------|---------- +| 3080 | Proxy | HTTPS port clients connect to. Used to authenticate `tsh` users and web users into the cluster. | Allow inbound connections from HTTP and SSH clients.| Allow outbound connections to HTTP and SSH clients. +| 3023 | Proxy | SSH port clients connect to after authentication. A proxy will forward this connection to port `3022` on the destination node. | Allow inbound traffic from SSH clients. | Allow outbound traffic to SSH clients. +| 3022 | Node | SSH port to the Node Service. This is Teleport's equivalent of port `22` for SSH. | Allow inbound traffic from proxy host. | Allow outbound traffic to the proxy host. +| 3025 | Auth | SSH port used by the Auth Service to serve its Auth API to other nodes in a cluster. | Allow inbound connections from all cluster nodes. | Allow outbound traffic to cluster nodes. +| 3024 | Proxy | SSH port used to create "reverse SSH tunnels" from behind-firewall environments into a trusted proxy server. | | + + + +## Installation + +We have a detailed [installation guide](../installation) which shows how to +install all available binaries or [install from +source](#../installation/#installing-from-source). Reference that guide to learn +the best way to install Teleport for your system and the come back here to +finish your production install. + +### Filesystem Layout + +By default a Teleport node has the following files present. The location of all +of them is configurable. + +| Default path | Purpose | +|------------------------------|----------| +| `/etc/teleport.yaml` | Teleport configuration file.| +| `/usr/local/bin/teleport` | Teleport daemon binary.| +| `/usr/local/bin/tctl` | Teleport admin tool. It is only needed for auth servers.| +| `/usr/local/bin/tsh` | Teleport CLI client tool. It is needed on any node that needs to connect to the cluster.| +| `/var/lib/teleport` | Teleport data directory. Nodes keep their keys and certificates there. Auth servers store the audit log and the cluster keys there, but the audit log storage can be further configured via `auth_service` section in the config file.| + +## Running Teleport in Production + +### Systemd Unit File + +In production, we recommend starting teleport daemon via an init system like +`systemd`. If systemd and unit files are new to you check out [this helpful guide](https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files). Here's the recommended Teleport service unit file for systemd. 
+
+
+```yaml
+[Unit]
+Description=Teleport SSH Service
+After=network.target
+
+[Service]
+Type=simple
+Restart=on-failure
+# Set the node's roles with the `--roles` flag.
+# In most production environments you will not
+# want to run all three roles on a single host.
+# --roles='proxy,auth,node' is the default value
+# if none is set.
+ExecStart=/usr/local/bin/teleport start --roles=auth --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid
+ExecReload=/bin/kill -HUP $MAINPID
+PIDFile=/var/run/teleport.pid
+
+[Install]
+WantedBy=multi-user.target
+```
+
+There are a couple of important things to notice about this file:
+
+1. The start command in the unit file specifies `--config` as a file, and very
+   few flags are passed to the `teleport` binary. Most of the configuration
+   for Teleport should be done in the [configuration file](../configuration).
+
+2. The **ExecReload** command allows admins to run `systemctl reload teleport`.
+   This will attempt to perform a graceful restart of Teleport, _*but it only
+   works if network-based backend storage like
+   [DynamoDB](../configuration/#storage) or
+   [etcd 3.3](../configuration/#storage) is configured*_. Graceful restarts will
+   fork a new process to handle new incoming requests and leave the old daemon
+   process running until existing clients disconnect.
+
+### Start the Teleport Service
+
+You can start Teleport as a systemd unit by enabling the `.service` file
+with the `systemctl` tool.
+
+```bash
+$ cd /etc/systemd/system
+# Use your text editor of choice to create the .service file
+# Here we use vim
+$ vi teleport.service
+# use the file above as is, or customize as needed
+# save the file
+$ systemctl enable teleport
+$ systemctl start teleport
+# show the status of the unit
+$ systemctl status teleport
+# follow tail of service logs
+$ journalctl -fu teleport
+# If you modify teleport.service later you will need to
+# reload the systemd daemon and reload teleport
+# to apply your changes
+$ systemctl daemon-reload
+$ systemctl reload teleport
+```
+
+You can also perform restarts or upgrades by sending `kill` signals
+to a Teleport daemon manually.
+
+| Signal | Teleport Daemon Behavior
+|-------------------------|---------------------------------------
+| `USR1` | Dumps diagnostics/debugging information into syslog.
+| `TERM`, `INT` or `KILL` | Immediate non-graceful shutdown. All existing connections will be dropped.
+| `USR2` | Forks a new Teleport daemon to serve new connections.
+| `HUP` | Forks a new Teleport daemon to serve new connections **and** initiates the graceful shutdown of the existing process when there are no more clients connected to it. This is the signal sent to trigger a graceful restart.
+
+### Adding Nodes to the Cluster
+
+We've written a dedicated guide on [Adding Nodes to your
+Cluster](./guides/node-join) which shows how to generate or set join tokens and
+use them to add nodes.
+
+## Security Considerations
+
+### CA Pinning
+
+Teleport nodes use the HTTPS protocol to offer the join tokens to the auth
+server. In a zero-trust environment, you must assume that an attacker can
+hijack the IP address of the auth server.
+
+To prevent this from happening, you need to supply every new node with an
+additional bit of information about the auth server. This technique is called
+"CA Pinning". It works by asking the auth server to produce a "CA Pin", which
+is a hashed value of its private key, i.e. it cannot be forged by an attacker.
+
+To get the current CA pin, run this on the auth server:
+
+```bash
+$ tctl status
+Cluster  staging.example.com
+User CA  never updated
+Host CA  never updated
+CA pin   sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1
+```
+
+The CA pin at the bottom needs to be passed to the new nodes when they're starting
+for the first time, i.e. when they join a cluster.
+
+Via CLI:
+
+```bash
+$ teleport start \
+    --roles=node \
+    --token=1ac590d36493acdaa2387bc1c492db1a \
+    --ca-pin=sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 \
+    --auth-server=10.12.0.6:3025
+```
+
+or via `/etc/teleport.yaml` on a node:
+
+```yaml
+teleport:
+  auth_token: "1ac590d36493acdaa2387bc1c492db1a"
+  ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1"
+  auth_servers:
+  - "10.12.0.6:3025"
+```
+
+!!! warning "Warning":
+    If a CA pin is not provided, the Teleport node will join a cluster but it
+    will print a `WARN` message (warning) into its standard error output.
+
+!!! warning "Warning":
+    The CA pin becomes invalid if a Teleport administrator performs the CA
+    rotation by executing `tctl auth rotate`.
+
+### Secure Data Storage
+
+By default the `teleport` daemon uses the local directory `/var/lib/teleport`
+to store its data. This applies to any role or service, including Auth, Node,
+or Proxy. While the Auth server hosts the most sensitive data, you will want
+to prevent unauthorized access to this directory on every host. Make sure that
+regular/non-admin users do not have access to this folder, particularly on the
+Auth server. Change the ownership of the directory with
+[`chown`](https://linuxize.com/post/linux-chown-command/). Since
+`/var/lib/teleport` is normally not writable by regular users, the daemon is
+typically started with elevated privileges:
+
+```bash
+$ sudo teleport start
+```
+
+If you are logged in as `root` you may want to create a new OS-level user first.
On linux create a new user called `` with the following commands: +```bash +$ adduser +$ su +``` + +Security considerations on installing tctl under root or not From 714071ee264b42e61a1d2bbb82893e7f0098b33d Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:16:04 +0300 Subject: [PATCH 17/23] add wip configuration page, rm from admin guide --- docs/4.1.yaml | 23 +- docs/4.1/admin-guide.md | 474 -------------------------------------- docs/4.1/configuration.md | 445 +++++++++++++++++++++++++++++++++++ 3 files changed, 451 insertions(+), 491 deletions(-) create mode 100644 docs/4.1/configuration.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 2b5f58d6ee265..bf37eff40b283 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -26,27 +26,16 @@ pages: - Documentation: - Introduction: intro.md - Quick Start Guide: quickstart.md + - Installation: installation.md + - Production: production.md + - CLI Docs: cli-docs.md + - YAML Configuration: configuration.md - User Manual: user-manual.md - Admin Manual: admin-guide.md - FAQ: faq.md - Guides: - - Quickstart: guides/quickstart.md - - Installation: guides/installation.md - - Production: guides/production.md - - Share Sessions: guides/session-sharing.md - - Replay Sessions: guides/audit-replay.md - - Manage User Permissions: guides/user-permissions.md - - Label Nodes: guides/node-labels.md - - Teleport with OpenSSH: guides/openssh.md - - Trusted Clusters: trustedclusters.md - - AWS: aws_oss.md - - Concepts: - - Teleport Basic Concepts: concepts/basics.md - - Teleport Users: concepts/users.md - - Teleport Nodes: concepts/nodes.md - - Teleport Auth: concepts/auth.md - - Teleport Proxy: concepts/proxy.md - - Architecture: concepts/architecture.md + # TODO: Add How-To Guide on Managing Nodes, Users, Trusted Clusters + # etc. any common task should have a guide - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index e793d1bd499a8..6a24c7a1b5d36 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -1,479 +1,5 @@ # Teleport Admin Manual -## Configuration - -You should use a [configuration file](#configuration-file) to configure the -[ `teleport` ](../cli-docs/#teleport) daemon. For simple experimentation, you can -use command line flags with the [ `teleport start` ](./cli-docs/#teleport-start) -command. Read about all the allowed flags in the [CLI -Docs](./cli-docs/#teleport-start) or run `teleport start --help` - -### Configuration File - -Teleport uses the YAML file format for configuration. A sample configuration -file is shown below. By default, it is stored in `/etc/teleport.yaml` - -!!! note "IMPORTANT": - When editing YAML configuration, please pay attention to how your - editor handles white space. YAML requires consistent handling of - tab characters. - -``` yaml -# By default, this file should be stored in /etc/teleport.yaml - -# This section of the configuration file applies to all teleport -# services. -teleport: - # nodename allows to assign an alternative name this node can be reached by. - # by default it's equal to hostname - nodename: graviton - - # Data directory where Teleport daemon keeps its data. - # See "Filesystem Layout" section above for more details. - data_dir: /var/lib/teleport - - # Invitation token used to join a cluster. it is not used on - # subsequent starts - auth_token: xxxx-token-xxxx - - # Optional CA pin of the auth server. This enables more secure way of adding new - # nodes to a cluster. 
See "Adding Nodes" section above. - ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" - - # When running in multi-homed or NATed environments Teleport nodes need - # to know which IP it will be reachable at by other nodes - # - # This value can be specified as FQDN e.g. host.example.com - advertise_ip: 10.1.0.5 - - # list of auth servers in a cluster. you will have more than one auth server - # if you configure teleport auth to run in HA configuration. - # If adding a node located behind NAT, use the Proxy URL. e.g. - # auth_servers: - # - teleport-proxy.example.com:3080 - auth_servers: - - - 10.1.0.5:3025 - - 10.1.0.6:3025 - - # Teleport throttles all connections to avoid abuse. These settings allow - # you to adjust the default limits - connection_limits: - max_connections: 1000 - max_users: 250 - - # Logging configuration. Possible output values are 'stdout', 'stderr' and - # 'syslog'. Possible severity values are INFO, WARN and ERROR (default). - log: - output: stderr - severity: ERROR - - # Configuration for the storage back-end used for the cluster state and the - # audit log. Several back-end types are supported. See "High Availability" - # section of this Admin Manual below to learn how to configure DynamoDB, - # S3, etcd and other highly available back-ends. - storage: - # By default teleport uses the `data_dir` directory on a local filesystem - type: dir - - # Array of locations where the audit log events will be stored. by - # default they are stored in `/var/lib/teleport/log` - audit_events_uri: ['file:///var/lib/teleport/log', 'dynamodb://events_table_name', 'stdout://'] - - # Use this setting to configure teleport to store the recorded sessions in - # an AWS S3 bucket. see "Using Amazon S3" chapter for more information. - audit_sessions_uri: 's3://example.com/path/to/bucket?region=us-east-1' - - # Cipher algorithms that the server supports. This section only needs to be - # set if you want to override the defaults. - ciphers: - - - aes128-ctr - - aes192-ctr - - aes256-ctr - - aes128-gcm@openssh.com - - chacha20-poly1305@openssh.com - - # Key exchange algorithms that the server supports. This section only needs - # to be set if you want to override the defaults. - kex_algos: - - - curve25519-sha256@libssh.org - - ecdh-sha2-nistp256 - - ecdh-sha2-nistp384 - - ecdh-sha2-nistp521 - - # Message authentication code (MAC) algorithms that the server supports. - # This section only needs to be set if you want to override the defaults. - mac_algos: - - - hmac-sha2-256-etm@openssh.com - - hmac-sha2-256 - - # List of the supported ciphersuites. If this section is not specified, - # only the default ciphersuites are enabled. - ciphersuites: - - tls-rsa-with-aes-128-gcm-sha256 - - tls-rsa-with-aes-256-gcm-sha384 - - tls-ecdhe-rsa-with-aes-128-gcm-sha256 - - tls-ecdhe-ecdsa-with-aes-128-gcm-sha256 - - tls-ecdhe-rsa-with-aes-256-gcm-sha384 - - tls-ecdhe-ecdsa-with-aes-256-gcm-sha384 - - tls-ecdhe-rsa-with-chacha20-poly1305 - - tls-ecdhe-ecdsa-with-chacha20-poly1305 - -# This section configures the 'auth service': -auth_service: - # Turns 'auth' role on. Default is 'yes' - enabled: yes - - # A cluster name is used as part of a signature in certificates - # generated by this CA. - # - # We strongly recommend to explicitly set it to something meaningful as it - # becomes important when configuring trust between multiple clusters. 
- # - # By default an automatically generated name is used (not recommended) - # - # IMPORTANT: if you change cluster_name, it will invalidate all generated - # certificates and keys (may need to wipe out /var/lib/teleport directory) - cluster_name: "main" - - authentication: - # default authentication type. possible values are 'local', 'oidc' and 'saml' - # only local authentication (Teleport's own user DB) is supported in the open - # source version - type: local - # second_factor can be off, otp, or u2f - second_factor: otp - # this section is used if second_factor is set to 'u2f' - u2f: - # app_id must point to the URL of the Teleport Web UI (proxy) accessible - # by the end users - app_id: https://localhost:3080 - # facets must list all proxy servers if there are more than one deployed - facets: - - - https://localhost:3080 - - # IP and the port to bind to. Other Teleport nodes will be connecting to - # this port (AKA "Auth API" or "Cluster API") to validate client - # certificates - listen_addr: 0.0.0.0:3025 - - # The optional DNS name the auth server if located behind a load balancer. - # (see public_addr section below) - public_addr: auth.example.com:3025 - - # Pre-defined tokens for adding new nodes to a cluster. Each token specifies - # the role a new node will be allowed to assume. The more secure way to - # add nodes is to use `ttl node add --ttl` command to generate auto-expiring - # tokens. - # - # We recommend to use tools like `pwgen` to generate sufficiently random - # tokens of 32+ byte length. - tokens: - - - "proxy,node:xxxxx" - - "auth:yyyy" - - # Optional setting for configuring session recording. Possible values are: - # "node" : sessions will be recorded on the node level (the default) - # "proxy" : recording on the proxy level, see "recording proxy mode" section. - # "off" : session recording is turned off - session_recording: "node" - - # This setting determines if a Teleport proxy performs strict host key checks. - # Only applicable if session_recording=proxy, see "recording proxy mode" for details. - proxy_checks_host_keys: yes - - # Determines if SSH sessions to cluster nodes are forcefully terminated - # after no activity from a client (idle client). - # Examples: "30m", "1h" or "1h30m" - client_idle_timeout: never - - # Determines if the clients will be forcefully disconnected when their - # certificates expire in the middle of an active SSH session. (default is 'no') - disconnect_expired_cert: no - - # Determines the interval at which Teleport will send keep-alive messages. The - # default value mirrors sshd at 15 minutes. keep_alive_count_max is the number - # of missed keep-alive messages before the server tears down the connection to the - # client. - keep_alive_interval: 15 - keep_alive_count_max: 3 - - # License file to start auth server with. Note that this setting is ignored - # in open-source Teleport and is required only for Teleport Pro, Business - # and Enterprise subscription plans. - # - # The path can be either absolute or relative to the configured `data_dir` - # and should point to the license file obtained from Teleport Download Portal. - # - # If not set, by default Teleport will look for the `license.pem` file in - # the configured `data_dir` . - license_file: /var/lib/teleport/license.pem - - # DEPRECATED in Teleport 3.2 (moved to proxy_service section) - kubeconfig_file: /path/to/kubeconfig - -# This section configures the 'node service': -ssh_service: - # Turns 'ssh' role on. 
Default is 'yes' - enabled: yes - - # IP and the port for SSH service to bind to. - listen_addr: 0.0.0.0:3022 - - # The optional public address the SSH service. This is useful if administrators - # want to allow users to connect to nodes directly, bypassing a Teleport proxy - # (see public_addr section below) - public_addr: node.example.com:3022 - - # See explanation of labels in "Labeling Nodes" section below - labels: - role: master - type: postgres - - # List of the commands to periodically execute. Their output will be used as node labels. - # See "Labeling Nodes" section below for more information and more examples. - commands: - # this command will add a label 'arch=x86_64' to a node - - - name: arch - - command: ['/bin/uname', '-p'] - period: 1h0m0s - - # enables reading ~/.tsh/environment before creating a session. by default - # set to false, can be set true here or as a command line flag. - permit_user_env: false - - # configures PAM integration. see below for more details. - pam: - enabled: no - service_name: teleport - -# This section configures the 'proxy service' -proxy_service: - # Turns 'proxy' role on. Default is 'yes' - enabled: yes - - # SSH forwarding/proxy address. Command line (CLI) clients always begin their - # SSH sessions by connecting to this port - listen_addr: 0.0.0.0:3023 - - # Reverse tunnel listening address. An auth server (CA) can establish an - # outbound (from behind the firewall) connection to this address. - # This will allow users of the outside CA to connect to behind-the-firewall - # nodes. - tunnel_listen_addr: 0.0.0.0:3024 - - # The HTTPS listen address to serve the Web UI and also to authenticate the - # command line (CLI) users via password+HOTP - web_listen_addr: 0.0.0.0:3080 - - # The DNS name the proxy HTTPS endpoint as accessible by cluster users. - # Defaults to the proxy's hostname if not specified. If running multiple - # proxies behind a load balancer, this name must point to the load balancer - # (see public_addr section below) - public_addr: proxy.example.com:3080 - - # The DNS name of the proxy SSH endpoint as accessible by cluster clients. - # Defaults to the proxy's hostname if not specified. If running multiple proxies - # behind a load balancer, this name must point to the load balancer. - # Use a TCP load balancer because this port uses SSH protocol. - ssh_public_addr: proxy.example.com:3023 - - # TLS certificate for the HTTPS connection. Configuring these properly is - # critical for Teleport security. - https_key_file: /var/lib/teleport/webproxy_key.pem - https_cert_file: /var/lib/teleport/webproxy_cert.pem - - # This section configures the Kubernetes proxy service - kubernetes: - # Turns 'kubernetes' proxy on. Default is 'no' - enabled: yes - - # Kubernetes proxy listen address. - listen_addr: 0.0.0.0:3026 - - # The DNS name of the Kubernetes proxy server that is accessible by cluster clients. - # If running multiple proxies behind a load balancer, this name must point to the - # load balancer. - public_addr: ['kube.example.com:3026'] - - # This setting is not required if the Teleport proxy service is - # deployed inside a Kubernetes cluster. Otherwise, Teleport proxy - # will use the credentials from this file: - kubeconfig_file: /path/to/kube/config -``` - -#### Public Addr - -Notice that all three Teleport services (proxy, auth, node) have an optional -`public_addr` property. The public address can take an IP or a DNS name. 
It can -also be a list of values: - -``` yaml -public_addr: ["proxy-one.example.com", "proxy-two.example.com"] -``` - -Specifying a public address for a Teleport service may be useful in the -following use cases: - -* You have multiple identical services, like proxies, behind a load balancer. -* You want Teleport to issue SSH certificate for the service with the additional - - principals, e.g.host names. - -## Authentication - -Teleport uses the concept of "authentication connectors" to authenticate users -when they execute [ `tsh login` ](../cli-docs/#tsh-login) command. There are three -types of authentication connectors: - -### Local Connector - -Local authentication is used to authenticate against a local Teleport user -database. This database is managed by [ `tctl users` ](./cli-docs/#tctl-users) -command. Teleport also supports second factor authentication (2FA) for the local -connector. There are three possible values (types) of 2FA: - - + `otp` is the default. It implements - - [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) - standard. You can use [Google - Authenticator](https://en.wikipedia.org/wiki/Google_Authenticator) or - [Authy](https://www.authy.com/) or any other TOTP client. - - + `u2f` implements [U2F](https://en.wikipedia.org/wiki/Universal_2nd_Factor) - - standard for utilizing hardware (USB) keys for second factor. - - + `off` turns off second factor authentication. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: local - second_factor: off -``` - -### Github OAuth 2.0 Connector - -This connector implements Github OAuth 2.0 authentication flow. Please refer to -Github documentation on [Creating an OAuth -App](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/) -to learn how to create and register an OAuth app. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: github -``` - -See [Github OAuth 2.0](#github-oauth-20) for details on how to configure it. - -### SAML - -This connector type implements SAML authentication. It can be configured against -any external identity manager like Okta or Auth0. This feature is only available -for Teleport Enterprise. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: saml -``` - -### OIDC - -Teleport implements OpenID Connect (OIDC) authentication, which is similar to -SAML in principle. This feature is only available for Teleport Enterprise. - -Here is an example of this setting in the `teleport.yaml` : - -``` yaml -auth_service: - authentication: - type: oidc -``` - -### FIDO U2F - -Teleport supports [FIDO U2F](https://www.yubico.com/about/background/fido/) -hardware keys as a second authentication factor. By default U2F is disabled. To -start using U2F: - -* Enable U2F in Teleport configuration `/etc/teleport.yaml` . -* For CLI-based logins you have to install - - [u2f-host](https://developers.yubico.com/libu2f-host/) utility. - -* For web-based logins you have to use Google Chrome, as it is the only browser - - supporting U2F at this time. 
- -``` yaml -# snippet from /etc/teleport.yaml to show an example configuration of U2F: -auth_service: - authentication: - type: local - second_factor: u2f - # this section is needed only if second_factor is set to 'u2f' - u2f: - # app_id must point to the URL of the Teleport Web UI (proxy) accessible - # by the end users - app_id: https://localhost:3080 - # facets must list all proxy servers if there are more than one deployed - facets: - - https://localhost:3080 -``` - -For single-proxy setups, the `app_id` setting can be equal to the domain name of -the proxy, but this will prevent you from adding more proxies without changing -the `app_id` . For multi-proxy setups, the `app_id` should be an HTTPS URL -pointing to a JSON file that mirrors `facets` in the auth config. - -!!! warning "Warning": - The `app_id` must never change in the lifetime of the - cluster. If the App ID changes, all existing U2F key registrations will - become invalid and all users who use U2F as the second factor will need to - re-register. When adding a new proxy server, make sure to add it to the list - of "facets" in the configuration file, but also to the JSON file referenced - by `app_id` - -**Logging in with U2F** - -For logging in via the CLI, you must first install -[u2f-host](https://developers.yubico.com/libu2f-host/). Installing: - -``` yaml -# OSX: -$ brew install libu2f-host - -# Ubuntu 16.04 LTS: -$ apt-get install u2f-host -``` - -Then invoke `tsh ssh` as usual to authenticate: - -``` -tsh --proxy ssh -``` - -!!! tip "Version Warning": External user identities are only supported in - [Teleport Enterprise](/enterprise/). Please reach out to - -`sales@gravitational.com` for more information. - ## Adding and Deleting Users This section covers internal user identities, i.e.user accounts created and diff --git a/docs/4.1/configuration.md b/docs/4.1/configuration.md new file mode 100644 index 0000000000000..cff9603217cb4 --- /dev/null +++ b/docs/4.1/configuration.md @@ -0,0 +1,445 @@ +# YAML Configuration + +### Configuration File + +Teleport uses the YAML file format for configuration. A sample configuration file is shown below. By default, it is stored in `/etc/teleport.yaml` + +!!! note "IMPORTANT": + When editing YAML configuration, please pay attention to how your editor + handles white space. YAML requires consistent handling of tab characters. + +```yaml +# By default, this file should be stored in /etc/teleport.yaml + +# This section of the configuration file applies to all teleport +# services. +teleport: + # nodename allows to assign an alternative name this node can be reached by. + # by default it's equal to hostname + nodename: graviton + + # Data directory where Teleport daemon keeps its data. + # See "Filesystem Layout" section above for more details. + data_dir: /var/lib/teleport + + # Invitation token used to join a cluster. it is not used on + # subsequent starts + auth_token: xxxx-token-xxxx + + # Optional CA pin of the auth server. This enables more secure way of adding new + # nodes to a cluster. See "Adding Nodes" section above. + ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" + + # When running in multi-homed or NATed environments Teleport nodes need + # to know which IP it will be reachable at by other nodes + # + # This value can be specified as FQDN e.g. host.example.com + advertise_ip: 10.1.0.5 + + # list of auth servers in a cluster. you will have more than one auth server + # if you configure teleport auth to run in HA configuration. 
+ # If adding a node located behind NAT, use the Proxy URL. e.g. + # auth_servers: + # - teleport-proxy.example.com:3080 + auth_servers: + - 10.1.0.5:3025 + - 10.1.0.6:3025 + + # Teleport throttles all connections to avoid abuse. These settings allow + # you to adjust the default limits + connection_limits: + max_connections: 1000 + max_users: 250 + + # Logging configuration. Possible output values are 'stdout', 'stderr' and + # 'syslog'. Possible severity values are INFO, WARN and ERROR (default). + log: + output: stderr + severity: ERROR + + # Configuration for the storage back-end used for the cluster state and the + # audit log. Several back-end types are supported. See "High Availability" + # section of this Admin Manual below to learn how to configure DynamoDB, + # S3, etcd and other highly available back-ends. + storage: + # By default teleport uses the `data_dir` directory on a local filesystem + type: dir + + # Array of locations where the audit log events will be stored. by + # default they are stored in `/var/lib/teleport/log` + audit_events_uri: ['file:///var/lib/teleport/log', 'dynamodb://events_table_name', 'stdout://'] + + # Use this setting to configure teleport to store the recorded sessions in + # an AWS S3 bucket. see "Using Amazon S3" chapter for more information. + audit_sessions_uri: 's3://example.com/path/to/bucket?region=us-east-1' + + # Cipher algorithms that the server supports. This section only needs to be + # set if you want to override the defaults. + ciphers: + - aes128-ctr + - aes192-ctr + - aes256-ctr + - aes128-gcm@openssh.com + - chacha20-poly1305@openssh.com + + # Key exchange algorithms that the server supports. This section only needs + # to be set if you want to override the defaults. + kex_algos: + - curve25519-sha256@libssh.org + - ecdh-sha2-nistp256 + - ecdh-sha2-nistp384 + - ecdh-sha2-nistp521 + + # Message authentication code (MAC) algorithms that the server supports. + # This section only needs to be set if you want to override the defaults. + mac_algos: + - hmac-sha2-256-etm@openssh.com + - hmac-sha2-256 + + # List of the supported ciphersuites. If this section is not specified, + # only the default ciphersuites are enabled. + ciphersuites: + - tls-rsa-with-aes-128-gcm-sha256 + - tls-rsa-with-aes-256-gcm-sha384 + - tls-ecdhe-rsa-with-aes-128-gcm-sha256 + - tls-ecdhe-ecdsa-with-aes-128-gcm-sha256 + - tls-ecdhe-rsa-with-aes-256-gcm-sha384 + - tls-ecdhe-ecdsa-with-aes-256-gcm-sha384 + - tls-ecdhe-rsa-with-chacha20-poly1305 + - tls-ecdhe-ecdsa-with-chacha20-poly1305 + + +# This section configures the 'auth service': +auth_service: + # Turns 'auth' role on. Default is 'yes' + enabled: yes + + # A cluster name is used as part of a signature in certificates + # generated by this CA. + # + # We strongly recommend to explicitly set it to something meaningful as it + # becomes important when configuring trust between multiple clusters. + # + # By default an automatically generated name is used (not recommended) + # Every Teleport cluster must have a name. If a name is not supplied via + # `teleport.yaml` configuration file, a GUID will be generated. + # **IMPORTANT:** renaming a cluster invalidates its keys and all + # certificates it had created. + # + # IMPORTANT: if you change cluster_name, it will invalidate all generated + # certificates and keys (may need to wipe out /var/lib/teleport directory) + cluster_name: "main" + + authentication: + # default authentication type. 
possible values are 'local', 'oidc' and 'saml' + # only local authentication (Teleport's own user DB) is supported in the open + # source version + type: local + # second_factor can be off, otp, or u2f + second_factor: otp + # this section is used if second_factor is set to 'u2f' + u2f: + # app_id must point to the URL of the Teleport Web UI (proxy) accessible + # by the end users + app_id: https://localhost:3080 + # facets must list all proxy servers if there are more than one deployed + facets: + - https://localhost:3080 + + # IP and the port to bind to. Other Teleport nodes will be connecting to + # this port (AKA "Auth API" or "Cluster API") to validate client + # certificates + listen_addr: 0.0.0.0:3025 + + # The optional DNS name the auth server if located behind a load balancer. + # (see public_addr section below) + public_addr: auth.example.com:3025 + + # Pre-defined tokens for adding new nodes to a cluster. Each token specifies + # the role a new node will be allowed to assume. The more secure way to + # add nodes is to use `ttl node add --ttl` command to generate auto-expiring + # tokens. + # + # We recommend to use tools like `pwgen` to generate sufficiently random + # tokens of 32+ byte length. + tokens: + - "proxy,node:xxxxx" + - "auth:yyyy" + + # Optional setting for configuring session recording. Possible values are: + # "node" : sessions will be recorded on the node level (the default) + # "proxy" : recording on the proxy level, see "recording proxy mode" section. + # "off" : session recording is turned off + session_recording: "node" + + # This setting determines if a Teleport proxy performs strict host key checks. + # Only applicable if session_recording=proxy, see "recording proxy mode" for details. + proxy_checks_host_keys: yes + + # Determines if SSH sessions to cluster nodes are forcefully terminated + # after no activity from a client (idle client). + # Examples: "30m", "1h" or "1h30m" + client_idle_timeout: never + + # Determines if the clients will be forcefully disconnected when their + # certificates expire in the middle of an active SSH session. (default is 'no') + disconnect_expired_cert: no + + # Determines the interval at which Teleport will send keep-alive messages. The + # default value mirrors sshd at 15 minutes. keep_alive_count_max is the number + # of missed keep-alive messages before the server tears down the connection to the + # client. + keep_alive_interval: 15 + keep_alive_count_max: 3 + + # License file to start auth server with. Note that this setting is ignored + # in open-source Teleport and is required only for Teleport Pro, Business + # and Enterprise subscription plans. + # + # The path can be either absolute or relative to the configured `data_dir` + # and should point to the license file obtained from Teleport Download Portal. + # + # If not set, by default Teleport will look for the `license.pem` file in + # the configured `data_dir`. + license_file: /var/lib/teleport/license.pem + + # DEPRECATED in Teleport 3.2 (moved to proxy_service section) + kubeconfig_file: /path/to/kubeconfig + +# This section configures the 'node service': +ssh_service: + # Turns 'ssh' role on. Default is 'yes' + enabled: yes + + # IP and the port for SSH service to bind to. + listen_addr: 0.0.0.0:3022 + + # The optional public address the SSH service. 
This is useful if administrators + # want to allow users to connect to nodes directly, bypassing a Teleport proxy + # (see public_addr section below) + public_addr: node.example.com:3022 + + # See explanation of labels in "Labeling Nodes" section below + labels: + role: master + type: postgres + + # List of the commands to periodically execute. Their output will be used as node labels. + # See "Labeling Nodes" section below for more information and more examples. + commands: + # this command will add a label 'arch=x86_64' to a node + - name: arch + command: ['/bin/uname', '-p'] + period: 1h0m0s + + # enables reading ~/.tsh/environment before creating a session. by default + # set to false, can be set true here or as a command line flag. + permit_user_env: false + + # configures PAM integration. see below for more details. + pam: + enabled: no + service_name: teleport + +# This section configures the 'proxy service' +proxy_service: + # Turns 'proxy' role on. Default is 'yes' + enabled: yes + + # SSH forwarding/proxy address. Command line (CLI) clients always begin their + # SSH sessions by connecting to this port + listen_addr: 0.0.0.0:3023 + + # Reverse tunnel listening address. An auth server (CA) can establish an + # outbound (from behind the firewall) connection to this address. + # This will allow users of the outside CA to connect to behind-the-firewall + # nodes. + tunnel_listen_addr: 0.0.0.0:3024 + + # The HTTPS listen address to serve the Web UI and also to authenticate the + # command line (CLI) users via password+HOTP + web_listen_addr: 0.0.0.0:3080 + + # The DNS name the proxy HTTPS endpoint as accessible by cluster users. + # Defaults to the proxy's hostname if not specified. If running multiple + # proxies behind a load balancer, this name must point to the load balancer + # (see public_addr section below) + public_addr: proxy.example.com:3080 + + # The DNS name of the proxy SSH endpoint as accessible by cluster clients. + # Defaults to the proxy's hostname if not specified. If running multiple proxies + # behind a load balancer, this name must point to the load balancer. + # Use a TCP load balancer because this port uses SSH protocol. + ssh_public_addr: proxy.example.com:3023 + + # TLS certificate for the HTTPS connection. Configuring these properly is + # critical for Teleport security. + https_key_file: /var/lib/teleport/webproxy_key.pem + https_cert_file: /var/lib/teleport/webproxy_cert.pem + + # This section configures the Kubernetes proxy service + kubernetes: + # Turns 'kubernetes' proxy on. Default is 'no' + enabled: yes + + # Kubernetes proxy listen address. + listen_addr: 0.0.0.0:3026 + + # The DNS name of the Kubernetes proxy server that is accessible by cluster clients. + # If running multiple proxies behind a load balancer, this name must point to the + # load balancer. + public_addr: ['kube.example.com:3026'] + + # This setting is not required if the Teleport proxy service is + # deployed inside a Kubernetes cluster. Otherwise, Teleport proxy + # will use the credentials from this file: + kubeconfig_file: /path/to/kube/config +``` + +#### Public Addr + +Notice that all three Teleport services (proxy, auth, node) have an optional +`public_addr` property. The public address can take an IP or a DNS name. 
+It can also be a list of values: + +```yaml +public_addr: ["proxy-one.example.com", "proxy-two.example.com"] +``` + +Specifying a public address for a Teleport service may be useful in the following use cases: + +* You have multiple identical services, like proxies, behind a load balancer. +* You want Teleport to issue SSH certificate for the service with the + additional principals, e.g. host names. + +## Authentication + +Teleport uses the concept of "authentication connectors" to authenticate users when +they execute `tsh login` command. There are three types of authentication connectors: + +### Local Connector + +Local authentication is used to authenticate against a local Teleport user database. This database +is managed by `tctl users` command. Teleport also supports second factor authentication +(2FA) for the local connector. There are three possible values (types) of 2FA: + + * `otp` is the default. It implements [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) + standard. You can use [Google Authenticator](https://en.wikipedia.org/wiki/Google_Authenticator) or + [Authy](https://www.authy.com/) or any other TOTP client. + * `u2f` implements [U2F](https://en.wikipedia.org/wiki/Universal_2nd_Factor) standard for utilizing hardware (USB) + keys for second factor. + * `off` turns off second factor authentication. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: local + second_factor: off +``` + +### Github OAuth 2.0 Connector + +This connector implements Github OAuth 2.0 authentication flow. Please refer +to Github documentation on [Creating an OAuth App](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/) +to learn how to create and register an OAuth app. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: github +``` + +See [Github OAuth 2.0](#github-oauth-20) for details on how to configure it. + +### SAML + +This connector type implements SAML authentication. It can be configured +against any external identity manager like Okta or Auth0. This feature is +only available for Teleport Enterprise. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: saml +``` + +### OIDC + +Teleport implements OpenID Connect (OIDC) authentication, which is similar to +SAML in principle. This feature is only available for Teleport Enterprise. + +Here is an example of this setting in the `teleport.yaml`: + +```yaml +auth_service: + authentication: + type: oidc +``` + + +### FIDO U2F + +Teleport supports [FIDO U2F](https://www.yubico.com/about/background/fido/) +hardware keys as a second authentication factor. By default U2F is disabled. To start using U2F: + +* Enable U2F in Teleport configuration `/etc/teleport.yaml`. +* For CLI-based logins you have to install [u2f-host](https://developers.yubico.com/libu2f-host/) utility. +* For web-based logins you have to use Google Chrome, as it is the only browser supporting U2F at this time. 
+ +```yaml +# snippet from /etc/teleport.yaml to show an example configuration of U2F: +auth_service: + authentication: + type: local + second_factor: u2f + # this section is needed only if second_factor is set to 'u2f' + u2f: + # app_id must point to the URL of the Teleport Web UI (proxy) accessible + # by the end users + app_id: https://localhost:3080 + # facets must list all proxy servers if there are more than one deployed + facets: + - https://localhost:3080 +``` + +For single-proxy setups, the `app_id` setting can be equal to the domain name of the +proxy, but this will prevent you from adding more proxies without changing the +`app_id`. For multi-proxy setups, the `app_id` should be an HTTPS URL pointing to +a JSON file that mirrors `facets` in the auth config. + +!!! warning "Warning": + The `app_id` must never change in the lifetime of the cluster. If the App ID + changes, all existing U2F key registrations will become invalid and all users + who use U2F as the second factor will need to re-register. + When adding a new proxy server, make sure to add it to the list of "facets" + in the configuration file, but also to the JSON file referenced by `app_id` + + +**Logging in with U2F** + +For logging in via the CLI, you must first install [u2f-host](https://developers.yubico.com/libu2f-host/). +Installing: + +```yaml +# OSX: +$ brew install libu2f-host + +# Ubuntu 16.04 LTS: +$ apt-get install u2f-host +``` + +Then invoke `tsh ssh` as usual to authenticate: + +``` +tsh --proxy ssh +``` + +!!! tip "Version Warning": + External user identities are only supported in [Teleport Enterprise](/enterprise/). Please reach + out to `sales@gravitational.com` for more information. From 1b3849f9eb25a0139b1ddd256ddaff7661c5a717 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:23:30 +0300 Subject: [PATCH 18/23] add node join, node label guides, format architecture guides --- docs/4.1.yaml | 3 + docs/4.1/architecture/auth.md | 35 ----- docs/4.1/architecture/proxy.md | 35 +++++ docs/4.1/guides/node-join.md | 244 +++++++++++++++++++++++++++++++++ docs/4.1/guides/node-labels.md | 98 +++++++++++++ 5 files changed, 380 insertions(+), 35 deletions(-) create mode 100644 docs/4.1/guides/node-join.md create mode 100644 docs/4.1/guides/node-labels.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index bf37eff40b283..5767778965b16 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -16,6 +16,7 @@ markdown_extensions: - admonition - def_list - footnotes + - codehilite - toc: marker: '[TOC]' extra_css: [] @@ -36,6 +37,8 @@ pages: - Guides: # TODO: Add How-To Guide on Managing Nodes, Users, Trusted Clusters # etc. any common task should have a guide + - Add a Node to a Cluster: guides/node-join.md + - Label Nodes: guides/node-labels.md - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/architecture/auth.md b/docs/4.1/architecture/auth.md index 7cd18acadd86b..797976dfbcd06 100644 --- a/docs/4.1/architecture/auth.md +++ b/docs/4.1/architecture/auth.md @@ -218,41 +218,6 @@ storage. `/var/lib/teleport/log` to allow them to combine all audit events into the same audit log. [Learn how to deploy Teleport in HA Mode.](../admin-guide#high-availability)) -## Recording Proxy Mode - -In this mode, the proxy terminates (decrypts) the SSH connection using the -certificate supplied by the client via SSH agent forwarding and then establishes -its own SSH connection to the final destination server, effectively becoming an -authorized "man in the middle". 
This allows the proxy server to forward SSH -session data to the auth server to be recorded, as shown below: - -![recording-proxy](../img/recording-proxy.svg?style=grv-image-center-lg) - -The recording proxy mode, although _less secure_, was added to allow Teleport -users to enable session recording for OpenSSH's servers running `sshd`, which is -helpful when gradually transitioning large server fleets to Teleport. - -We consider the "recording proxy mode" to be less secure for two reasons: - -1. It grants additional privileges to the Teleport proxy. In the default mode, - the proxy stores no secrets and cannot "see" the decrypted data. This makes a - proxy less critical to the security of the overall cluster. But if an - attacker gains physical access to a proxy node running in the "recording" - mode, they will be able to see the decrypted traffic and client keys stored - in proxy's process memory. -2. Recording proxy mode requires the SSH agent forwarding. Agent forwarding is - required because without it, a proxy will not be able to establish the 2nd - connection to the destination node. - -However, there are advantages of proxy-based session recording too. When -sessions are recorded at the nodes, a root user can add iptables rules to -prevent sessions logs from reaching the Auth Server. With sessions recorded at -the proxy, users with root privileges on nodes have no way of disabling the -audit. - -See the [admin guide](../admin-guide#recorded-sessions) to learn how to turn on the -recording proxy mode. - ## Storage Back-Ends Different types of cluster data can be configured with different storage diff --git a/docs/4.1/architecture/proxy.md b/docs/4.1/architecture/proxy.md index 986a8adc9ee19..b90772e7f966d 100644 --- a/docs/4.1/architecture/proxy.md +++ b/docs/4.1/architecture/proxy.md @@ -90,6 +90,41 @@ client `ssh` or using `tsh`: [SSH jump hosts](https://wiki.gentoo.org/wiki/SSH_jump_host) implemented using OpenSSH's `ProxyCommand`. also supports OpenSSH's ProxyJump/ssh -J implementation as of Teleport 4.1. +## Recording Proxy Mode + +In this mode, the proxy terminates (decrypts) the SSH connection using the +certificate supplied by the client via SSH agent forwarding and then establishes +its own SSH connection to the final destination server, effectively becoming an +authorized "man in the middle". This allows the proxy server to forward SSH +session data to the auth server to be recorded, as shown below: + +![recording-proxy](../img/recording-proxy.svg?style=grv-image-center-lg) + +The recording proxy mode, although _less secure_, was added to allow Teleport +users to enable session recording for OpenSSH's servers running `sshd`, which is +helpful when gradually transitioning large server fleets to Teleport. + +We consider the "recording proxy mode" to be less secure for two reasons: + +1. It grants additional privileges to the Teleport proxy. In the default mode, + the proxy stores no secrets and cannot "see" the decrypted data. This makes a + proxy less critical to the security of the overall cluster. But if an + attacker gains physical access to a proxy node running in the "recording" + mode, they will be able to see the decrypted traffic and client keys stored + in proxy's process memory. +2. Recording proxy mode requires the SSH agent forwarding. Agent forwarding is + required because without it, a proxy will not be able to establish the 2nd + connection to the destination node. + +However, there are advantages of proxy-based session recording too. 
When +sessions are recorded at the nodes, a root user can add iptables rules to +prevent sessions logs from reaching the Auth Server. With sessions recorded at +the proxy, users with root privileges on nodes have no way of disabling the +audit. + +See the [admin guide](../admin-guide#recorded-sessions) to learn how to turn on +the recording proxy mode. + ## More Concepts * [Architecture Overview](./architecture) diff --git a/docs/4.1/guides/node-join.md b/docs/4.1/guides/node-join.md new file mode 100644 index 0000000000000..3675f2f2e0692 --- /dev/null +++ b/docs/4.1/guides/node-join.md @@ -0,0 +1,244 @@ +## Adding Nodes to the Cluster + +This guide will show you a few different ways to generate and use join tokens. +Join tokens are used by nodes to prove that they are trusted by an admin and +should be allowed to join a cluster. Once a node has joined a cluster it can +see the IP addresses and labels of other nodes along with Teleport User data. + +[TOC] + +## Recommended Prerequisites + +* Read through the [Architecture Overview](../architecture/overview). +* Read through the [Production Guide](./production) if you are setting up +Teleport in production. +* Run _all_ nodes as [Systemd Units](./production/#systemd-unit-file) unless you +are just working in a sandbox. + +## Step 1: Generate or Set a Join Token + +There are two ways to invite nodes to join a cluster: + +* **Dynamic Tokens**: Most secure +* **Static Tokens**: Less secure + +### Option 1 (Recommended): Generate a dynamic token + +You can generate or set a short-lived token with the +[`tctl`](../cli-docs/#tctl) admin tool. + +We recommend this method rather than static tokens because dynamic tokens +automatically expire, preventing potentially malicious actors from adding nodes +to your cluster. + +The [`tctl nodes add`](../cli-docs/#tctl-nodes-add) command also shows the +current CA Pin, which validates the current private key of the Auth Server +before allowing a node to join the cluster. Read more about CA Pinning in the +[Production Guide](./production/#ca-pinning). + +```bsh +# Set a specific token value that must be used within 5 minutes +$ tctl nodes add --ttl=5m --roles=node --token=secret-token-value +The invite token: secret-token-value +This token will expire in 5 minutes + +Run this on the new node to join the cluster: + +> teleport start \ + --roles=node \ + --token=secret-token-value \ + --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d \ + --auth-server=10.164.0.7:3025 + +Please note: + + - This invitation token will expire in 5 minutes + - 10.164.0.7:3025 must be reachable from the new node +``` + +If `--token` is not provided `tctl` will generate one. +```bsh +# generate a dynamic invitation token for a new node: +$ tctl nodes add --ttl=5m --roles=node +The invite token: e94d68a8a1e5821dbd79d03a960644f0 +This token will expire in 5 minutes + +Run this on the new node to join the cluster: + +> teleport start \ + --roles=node \ + --token=e94d68a8a1e5821dbd79d03a960644f0 \ + --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d \ + --auth-server=10.164.0.7:3025 + +Please note: + + - This invitation token will expire in 5 minutes + - 10.164.0.7:3025 must be reachable from the new node +``` + +The command prints out a `teleport start` command which you can run on a node +that you want to add to a cluster. We only recommend this option for sandbox or +staging environments as you are getting started with teleport. + +!!! 
warning "Resiliency Warning"
+    If the process fails unexpectedly or the node restarts, the node service will not
+    restart automatically. See how to [add a node with a config
+    file](#option-1-recommended-run-the-node-service-with-a-config-file) for a more
+    resilient method.
+
+### Option 2: Set a static token in config file
+
+You can set a static token in the `auth_service` section of your configuration
+file. Each entry in the `tokens` list specifies the role(s) a joining host may
+assume and the token value it must present. We encourage the use of
+[dynamic tokens](#option-1-recommended-generate-a-dynamic-token) for security,
+but using a static token may be the best option for some teams.
+
+The tokens set in the config will not expire unless they are removed from the
+config and the `teleport` daemon is restarted. Anyone with the token and access
+to the auth server's network can add nodes to the cluster.
+
+If you are adding a node using a static token, we recommend that you add the
+[`ca_pin`](./production/#ca-pinning) key to the `teleport` section on the node
+to be added. Here's an example of how this works.
+
+```bash
+# get the CA Pin on the Auth server
+$ tctl status
+Cluster grav-00
+User CA never updated
+Host CA never updated
+CA pin sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d
+```
+
+Edit the YAML config on the Auth Server:
+
+```diff
+# Add static token "secret-token-value"
+# which will allow nodes with role `node` to join
+auth_service:
+  enabled: true
+  tokens:
+  # This static token allows new hosts to join the cluster as
+  # "node" role with the token `secret-token-value`.
++  - "node:secret-token-value"
+```
+
+You will need to restart teleport for the static token configuration to take
+effect. In production we recommend using a [Systemd Unit File](./production) to
+manage the `teleport` daemon, so we show `systemctl` commands below. If you
+are not currently using `systemctl`, just use `Ctrl-C` or `kill` to stop the
+current teleport process.
+
+```bash
+$ systemctl reload teleport
+# Tail the teleport service logs to confirm that it worked
+$ journalctl -fu teleport
+```
+
+## Step 2: Use a Node Join Token
+
+In the previous step [`tctl nodes add`](../cli-docs/#tctl-nodes-add) printed out
+a `teleport start` command which you can run on the node you want to add. You
+can use this command for testing, but be cautious! If the process fails
+unexpectedly or the node restarts, the node service will not restart
+automatically. Fix this by running [Teleport as a System
+Unit](./production/#systemd-unit-file).
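+
+The following is a minimal sketch of such a unit file for a node-only host. It
+assumes the same binary and configuration paths used in the Production Guide
+(`/usr/local/bin/teleport` and `/etc/teleport.yaml`); adjust them to match your
+installation:
+
+```yaml
+[Unit]
+Description=Teleport Node Service
+After=network.target
+
+[Service]
+Type=simple
+Restart=on-failure
+# Run only the node role on this host; auth and proxy run elsewhere
+ExecStart=/usr/local/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid
+ExecReload=/bin/kill -HUP $MAINPID
+PIDFile=/var/run/teleport.pid
+
+[Install]
+WantedBy=multi-user.target
+```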
+
+### Option 1 (Recommended): Run the node service with a config file
+
+Check that the join token you are using is recognized by the Auth Service:
+
+```bash
+# run this on an auth node
+# here we have a static token which will never expire
+# and a dynamic token which will expire
+$ tctl tokens ls
+Token                            Type Expiry Time (UTC)
+-------------------------------- ---- -------------------
+fuzzywuzzywasabear               Node never
+b150b9349b4ca40bcb4df298a2f50152 Node 17 Oct 19 11:18 UTC
+```
+
+Add the token and [CA Pin](../production/#ca-pinning) to your config file:
+
+```diff
+# Node Service Config
+# You may have other config
+# this example shows a minimal config
+# for running only the `node` service
+teleport:
++  auth_token: "fuzzywuzzywasabear"
++  ca_pin: "sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d"
+  auth_servers:
+    - 10.164.0.7:3025
+ssh_service:
+  enabled: "yes"
+auth_service:
+  enabled: "no"
+proxy_service:
+  enabled: "no"
+```
+
+Save these files and restart/start both services:
+
+```bash
+# on auth server
+$ systemctl reload teleport
+```
+
+```bash
+# on node which is joining
+$ systemctl start teleport
+```
+
+Use `systemctl status teleport` or `journalctl -u teleport` to check the logs
+and make sure there are no errors.
+
+!!! warning "Certificate Warnings"
+    If you previously joined the cluster using an old token or certificate,
+    you may see an error `x509: certificate signed by unknown authority`.
+    This is due to a mismatch between the Auth Server state and the information
+    presented by the node. To resolve it, you can remove the node certificate
+    with `rm -r /var/lib/teleport` on the node and/or
+    `tctl rm nodes/` on the auth node to make Teleport Auth "forget"
+    the node and start fresh.
+
+### Option 2: Start the node service via the CLI
+
+This option can be used to quickly add a node to a cluster with minimal
+configuration. We only recommend this option for sandbox or staging environments
+while you are getting started with Teleport. Get the CA Pin of the auth node by
+running `tctl status`.
+
+```bash
+# adding a new regular SSH node to the cluster:
+$ teleport start --roles=node --token=b150b9349b4ca40bcb4df298a2f50152 \
+> --auth-server=10.164.0.7:3025 \
+> --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d
+```
+
+## Next Steps
+
+As new nodes come online, they start sending ping requests every few seconds
+to the Auth service. You can see the nodes that have successfully joined
+by running [`tctl nodes ls`](../cli-docs/#tctl-nodes-ls):
+
+```bsh
+$ tctl nodes ls
+
+Node Name Node ID                              Address       Labels
+--------- -------                              -------       ------
+turing    d52527f9-b260-41d0-bb5a-e23b0cfe0f8f 10.1.0.5:3022 distro:ubuntu
+dijkstra  c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro:debian
+```
+
+!!! tip "Join Tokens are only used for the initial connection"
+    It is important to understand that join tokens are only used to establish
+    the connection for the first time. The node and the cluster will exchange
+    certificates and won't be using the token to re-establish the connection in
+    the future. Future connections will rely upon the [node certificate](../architecture/auth/#authentication-in-teleport) to identify and authorize a node.
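+
+Once your nodes have joined, it is good practice to clean up any invite tokens
+that are no longer needed. A short sketch, using `tctl tokens ls` from earlier in
+this guide together with `tctl tokens rm`; the token value below is a placeholder
+taken from the example output above:
+
+```bash
+# list outstanding tokens and revoke one that is no longer needed
+$ tctl tokens ls
+$ tctl tokens rm b150b9349b4ca40bcb4df298a2f50152
+```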
+
+**More Guides**
+* Add Labels to Nodes
+* Revoke Tokens

diff --git a/docs/4.1/guides/node-labels.md b/docs/4.1/guides/node-labels.md
new file mode 100644
index 0000000000000..7f88ea6531e60
--- /dev/null
+++ b/docs/4.1/guides/node-labels.md
@@ -0,0 +1,98 @@
+## Label Nodes
+
+[TOC]
+
+Teleport allows for the application of arbitrary key-value pairs to each node,
+called labels. There are two kinds of labels:
+
+1. `static labels` do not change over time while the `teleport` process is running.
+   Examples of static labels are the physical location of nodes, the name of the
+   environment (staging vs production), etc.
+
+2. `dynamic labels`, also known as "label commands", allow you to generate labels
+   at runtime. Teleport will execute an external command on a node at a
+   configurable frequency and the output of the command becomes the label value.
+   Examples include reporting load averages, presence of a process, time since
+   the last reboot, etc.
+
+There are two ways to configure node labels.
+
+1. Via the command line, using the `--labels` flag of the `teleport start` command.
+2. Using the `/etc/teleport.yaml` configuration file on the nodes.
+
+## Example 1: Add labels on the command line
+
+To define labels as command line arguments, use the `--labels` flag as shown below
+when you start the `teleport --roles=node` service. This method works well for
+static labels or simple commands.
+
+In this example the node will have the static label `env=sandbox`. Teleport will
+run `uptime -p` every minute and assign the result to the label key `uptime`. It
+will also run `uname -r` every hour and assign the result to the label key `kernel`.
+
+```bash
+# first kill previous instances of teleport
+# with Ctrl-C or kill
+$ teleport start --roles=node \
+> --labels env=sandbox,uptime=[1m:"uptime -p"],kernel=[1h:"uname -r"] \
+> --token=secret-token-value \
+> --ca-pin=sha256:1146cdd2b887772dcc2e879232c8f60012a839f7958724ce5744005474b15b9d \
+> --auth-server=10.164.0.7:3025
+```
+
+!!! warning "Resiliency Warning"
+    If the process fails unexpectedly or the node restarts, the node service will not
+    restart automatically. Run the `teleport` binary as a [Systemd Unit](./production/#systemd-unit-file) to avoid this.
+
+## Example 2: Add labels to the config file
+
+Alternatively, you can update `labels` via a configuration file:
+
+```yaml
+ssh_service:
+  enabled: "yes"
+  # Static labels are simple key/value pairs:
+  labels:
+    environment: test
+```
+
+To configure dynamic labels via a configuration file, define a `commands` array
+as shown below. The `name` key is the label key: `arch` in the example here.
+
+```yaml
+ssh_service:
+  enabled: "yes"
+  # Dynamic labels are listed under "commands":
+  commands:
+  - name: arch
+    command: ['/usr/bin/uname', '-p']
+    # this setting tells teleport to execute the command above
+    # once an hour. `period` cannot be less than one minute.
+    period: 1h0m0s
+```
+
+`/path/to/executable` must be a valid executable command (i.e. the executable
+bit must be set). If you run the `teleport` daemon as `root` this should not be
+an issue, but if `teleport` runs as a non-root user on your system, check the
+permissions of the executable with `ls -l`.
+Modify file permissions from an authorized OS user with the `chmod +x` command.
+If the executable is a shell script it must have a proper [shebang
+line](https://en.wikipedia.org/wiki/Shebang_(Unix)).
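+
+For example, a minimal label script might look like the sketch below (the path
+`/usr/local/bin/teleport-uptime.sh` is only an illustration; use whatever
+location suits your hosts):
+
+```bash
+#!/bin/sh
+# Prints one short value; teleport uses the command's output as the label value.
+uptime -p
+```
+
+Remember to make it executable, e.g. `chmod +x /usr/local/bin/teleport-uptime.sh`,
+and reference it from the `commands` section, e.g.
+`command: ['/usr/local/bin/teleport-uptime.sh']`.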
+ +**Syntax Tip:** notice that `command` setting is an array where the first element +is a valid executable and each subsequent element is an argument, i.e: + +```yaml +# valid syntax: +command: ["/bin/uname", "-m"] + +# INVALID syntax: +command: ["/bin/uname -m"] + +# if you want to pipe several bash commands together, here's how to do it: +# notice how ' and " are interchangeable and you can use it for quoting: +command: ["/bin/sh", "-c", "uname -a | egrep -o '[0-9]+\.[0-9]+\.[0-9]+'"] +``` +**More Guides** +* Add Nodes to a Cluster +* Revoke Tokens From e729aa612fbc285be5531ceb2154559eb032e8b5 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:24:44 +0300 Subject: [PATCH 19/23] add wip token management guide --- docs/4.1.yaml | 1 + docs/4.1/guides/token-management.md | 41 +++++++++++++++++++++++++++++ 2 files changed, 42 insertions(+) create mode 100644 docs/4.1/guides/token-management.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 5767778965b16..9e9937baeac9a 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -39,6 +39,7 @@ pages: # etc. any common task should have a guide - Add a Node to a Cluster: guides/node-join.md - Label Nodes: guides/node-labels.md + - Manage Tokens: guides/token-management.md - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/guides/token-management.md b/docs/4.1/guides/token-management.md new file mode 100644 index 0000000000000..3d4502ed7ddd3 --- /dev/null +++ b/docs/4.1/guides/token-management.md @@ -0,0 +1,41 @@ +## Manage Tokens + +TODO: WIP + +This guide will show you how to add, remove, and view active tokens. + +There are several kinds of tokens that you will see as an admin: + +* Node Join Tokens +* User Join Tokens +* Trusted Cluster Tokens + +## Revoking Invitations + +As you have seen above, Teleport uses tokens to invite users to a cluster (sign-up tokens) or +to add new nodes to it (provisioning tokens). + +Both types of tokens can be revoked before they can be used. To see a list of outstanding tokens, +run this command: + +```bsh +$ tctl tokens ls + +Token Role Expiry Time (UTC) +----- ---- ----------------- +eoKoh0caiw6weoGupahgh6Wuo7jaTee2 Proxy never +696c0471453e75882ff70a761c1a8bfa Node 17 May 16 03:51 UTC +6fc5545ab78c2ea978caabef9dbd08a5 Signup 17 May 16 04:24 UTC +``` + +In this example, the first token has a "never" expiry date because it is a static token configured via a config file. + +The 2nd token with "Node" role was generated to invite a new node to this cluster. And the +3rd token was generated to invite a new user. 
+ +The latter two tokens can be deleted (revoked) via `tctl tokens del` command: + +```yaml +$ tctl tokens del 696c0471453e75882ff70a761c1a8bfa +Token 696c0471453e75882ff70a761c1a8bfa has been deleted +``` \ No newline at end of file From e0dd299a14328316845a8f6782b180ba34b3e164 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:33:26 +0300 Subject: [PATCH 20/23] add wip trusted cluster guide, rm guide content from admin-guide --- docs/4.1.yaml | 1 + docs/4.1/admin-guide.md | 281 +--------------------------- docs/4.1/guides/trusted-clusters.md | 56 ++++++ 3 files changed, 59 insertions(+), 279 deletions(-) create mode 100644 docs/4.1/guides/trusted-clusters.md diff --git a/docs/4.1.yaml b/docs/4.1.yaml index 9e9937baeac9a..640334a091be0 100644 --- a/docs/4.1.yaml +++ b/docs/4.1.yaml @@ -40,6 +40,7 @@ pages: - Add a Node to a Cluster: guides/node-join.md - Label Nodes: guides/node-labels.md - Manage Tokens: guides/token-management.md + - Add a Trusted Cluster: guides/trusted-clusters.md - Integrations: - Okta: ssh_okta.md - Active Directory (ADFS): ssh_adfs.md diff --git a/docs/4.1/admin-guide.md b/docs/4.1/admin-guide.md index 6a24c7a1b5d36..7f97ed20eee50 100644 --- a/docs/4.1/admin-guide.md +++ b/docs/4.1/admin-guide.md @@ -2,6 +2,8 @@ ## Adding and Deleting Users + + This section covers internal user identities, i.e.user accounts created and stored in Teleport's internal storage. Most production users of Teleport use _external_ users via [Github](#github-oauth-20) or [Okta](ssh_okta) or any other @@ -95,285 +97,6 @@ Some fields in the user record are reserved for internal use. Some of them will be finalized and documented in the future versions. Fields like `is_locked` or `traits/logins` can be used starting in version 2.3 -## Adding Nodes to the Cluster - -Teleport is a "clustered" system, meaning it only allows access to nodes -(servers) that had been previously granted cluster membership. - -A cluster membership means that a node receives its own host certificate signed -by the cluster's auth server. To receive a host certificate upon joining a -cluster, a new Teleport host must present an "invite token". An invite token -also defines which role a new host can assume within a cluster: `auth` , `proxy` -or `node` . - -There are two ways to create invitation tokens: - -* **Static Tokens** are easy to use and somewhat less secure. -* **Dynamic Tokens** are more secure but require more planning. - -### Static Tokens - -Static tokens are defined ahead of time by an administrator and stored in the -auth server's config file: - -``` yaml -# Config section in `/etc/teleport.yaml` file for the auth server -auth_service: - enabled: true - tokens: - # This static token allows new hosts to join the cluster as "proxy" or "node" - - - "proxy,node:secret-token-value" - - # A token can also be stored in a file. In this example the token for adding - # new auth servers is stored in /path/to/tokenfile - - - "auth:/path/to/tokenfile" - -``` - -### Short-lived Tokens - -A more secure way to add nodes to a cluster is to generate tokens as they are -needed. Such token can be used multiple times until its time to live (TTL) -expires. - -Use the [ `tctl` ](../cli-docs/#tctl) tool to register a new invitation token (or -it can also generate a new token for you). 
In the following example a new token -is created with a TTL of 5 minutes: - -``` bsh -$ tctl nodes add --ttl=5m --roles=node,proxy --token=secret-value -The invite token: secret-value -``` - -If `--token` is not provided, [ `tctl` ](../cli-docs/#tctl) will generate one: - -``` bsh -# generate a short-lived invitation token for a new node: -$ tctl nodes add --ttl=5m --roles=node,proxy -The invite token: e94d68a8a1e5821dbd79d03a960644f0 - -# you can also list all generated non-expired tokens: -$ tctl tokens ls -Token Type Expiry Time ---------------- ----------- --------------- -e94d68a8a1e5821dbd79d03a960644f0 Node 25 Sep 18 00:21 UTC - -# ... or revoke an invitation before it's used: -$ tctl tokens rm e94d68a8a1e5821dbd79d03a960644f0 -``` - -### Using Node Invitation Tokens - -Both static and short-lived tokens are used the same way. Execute the following -command on a new node to add it to a cluster: - -``` bsh -# adding a new regular SSH node to the cluster: -$ teleport start --roles=node --token=secret-token-value --auth-server=10.0.10.5 - -# adding a new regular SSH node using Teleport Node Tunneling: -$ teleport start --roles=node --token=secret-token-value --auth-server=teleport-proxy.example.com:3080 - -# adding a new proxy service on the cluster: -$ teleport start --roles=proxy --token=secret-token-value --auth-server=10.0.10.5 -``` - -As new nodes come online, they start sending ping requests every few seconds to -the CA of the cluster. This allows users to explore cluster membership and size: - -``` bsh -$ tctl nodes ls - -Node Name Node ID Address Labels ---------- ------- ------- ------ -turing d52527f9-b260-41d0-bb5a-e23b0cfe0f8f 10.1.0.5:3022 distro:ubuntu -dijkstra c9s93fd9-3333-91d3-9999-c9s93fd98f43 10.1.0.6:3022 distro:debian -``` - -### Untrusted Auth Servers - -Teleport nodes use the HTTPS protocol to offer the join tokens to the auth -server running on `10.0.10.5` in the example above. In a zero-trust environment, -you must assume that an attacker can highjack the IP address of the auth server -e.g. `10.0.10.5` . - -To prevent this from happening, you need to supply every new node with an -additional bit of information about the auth server. This technique is called -"CA Pinning". It works by asking the auth server to produce a "CA Pin", which is -a hashed value of it's private key, i.e.it cannot be forged by an attacker. - -On the auth server: - -``` bash -$ tctl status -Cluster staging.example.com -User CA never updated -Host CA never updated -CA pin sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 -``` - -The "CA pin" at the bottom needs to be passed to the new nodes when they're -starting for the first time, i.e.when they join a cluster: - -Via CLI: - -``` bash -$ teleport start \ - --roles=node \ - --token=1ac590d36493acdaa2387bc1c492db1a \ - --ca-pin=sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 \ - --auth-server=10.12.0.6:3025 -``` - -or via `/etc/teleport.yaml` on a node: - -``` yaml -teleport: - auth_token: "1ac590d36493acdaa2387bc1c492db1a" - ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1" - auth_servers: - - - "10.12.0.6:3025" - -``` - -!!! warning "Warning": - If a CA pin not provided, Teleport node will join a - cluster but it will print a `WARN` message (warning) into it's standard - error output. - -!!! 
warning "Warning": - The CA pin becomes invalid if a Teleport administrator - performs the CA rotation by executing - [ `tctl auth rotate` ](../cli-docs/#tctl-auth-rotate) . - -## Revoking Invitations - -As you have seen above, Teleport uses tokens to invite users to a cluster -(sign-up tokens) or to add new nodes to it (provisioning tokens). - -Both types of tokens can be revoked before they can be used. To see a list of -outstanding tokens, run this command: - -``` bsh -$ tctl tokens ls - -Token Role Expiry Time (UTC) ------ ---- ----------------- -eoKoh0caiw6weoGupahgh6Wuo7jaTee2 Proxy never -696c0471453e75882ff70a761c1a8bfa Node 17 May 16 03:51 UTC -6fc5545ab78c2ea978caabef9dbd08a5 Signup 17 May 16 04:24 UTC -``` - -In this example, the first token has a "never" expiry date because it is a -static token configured via a config file. - -The 2nd token with "Node" role was generated to invite a new node to this -cluster. And the 3rd token was generated to invite a new user. - -The latter two tokens can be deleted (revoked) via [`tctl tokens -del`](../cli-docs/#tctl-tokens-rm) command: - -``` yaml -$ tctl tokens del 696c0471453e75882ff70a761c1a8bfa -Token 696c0471453e75882ff70a761c1a8bfa has been deleted -``` - -## Labeling Nodes - -In addition to specifying a custom nodename, Teleport also allows for the -application of arbitrary key:value pairs to each node, called labels. There are -two kinds of labels: - -1. `static labels` do not change over time, while - [ `teleport` ](../cli-docs/#teleport) process is running. - Examples of static labels are physical location of nodes, name of the - environment (staging vs production), etc. - -2. `dynamic labels` also known as "label commands" allow to generate labels at - runtime. Teleport will execute an external command on a node at a - configurable frequency and the output of a command becomes the label value. - Examples include reporting load averages, presence of a process, time after - last reboot, etc. - -There are two ways to configure node labels. - -1. Via command line, by using `--labels` flag to `teleport start` command. -2. Using `/etc/teleport.yaml` configuration file on the nodes. - -To define labels as command line arguments, use `--labels` flag like shown -below. This method works well for static labels or simple commands: - -``` yaml -$ teleport start --labels uptime=[1m:"uptime -p"],kernel=[1h:"uname -r"] -``` - -Alternatively, you can update `labels` via a configuration file: - -``` yaml -ssh_service: - enabled: "yes" - # Static labels are simple key/value pairs: - labels: - environment: test -``` - -To configure dynamic labels via a configuration file, define a `commands` array -as shown below: - -``` yaml -ssh_service: - enabled: "yes" - # Dynamic labels AKA "commands": - commands: - - + name: arch - - command: ['/path/to/executable', 'flag1', 'flag2'] - # this setting tells teleport to execute the command above - # once an hour. this value cannot be less than one minute. - period: 1h0m0s -``` - -`/path/to/executable` must be a valid executable command (i.e.executable bit -must be set) which also includes shell scripts with a proper [shebang -line](https://en.wikipedia.org/wiki/Shebang_(Unix)). 
-
-**Important:** notice that `command` setting is an array where the first element
-is a valid executable and each subsequent element is an argument, i.e:
-
-``` yaml
-# valid syntax:
-command: ["/bin/uname", "-m"]
-
-# INVALID syntax:
-command: ["/bin/uname -m"]
-
-# if you want to pipe several bash commands together, here's how to do it:
-# notice how ' and " are interchangeable and you can use it for quoting:
-command: ["/bin/sh", "-c", "uname -a | egrep -o '[0-9]+\.[0-9]+\.[0-9]+'"]
-```
-
-## Audit Log
-
-Teleport logs every SSH event into its audit log. There are two components of
-the audit log:
-
-1. **SSH Events:** Teleport logs events like successful user logins along with
-
-   the metadata like remote IP address, time and the session ID.
-
-2. **Recorded Sessions:** Every SSH shell session is recorded and can be
-
-   replayed later. The recording is done by the nodes themselves, by default,
-   but can be configured to be done by the proxy.
-
-Refer to the ["Audit Log" chapter in the Teleport
-Architecture](architecture#audit-log) to learn more about how the audit Log and
-session recording are designed.
-
 ### SSH Events
 
 Teleport supports multiple storage back-ends for storing the SSH events. The
diff --git a/docs/4.1/guides/trusted-clusters.md b/docs/4.1/guides/trusted-clusters.md
new file mode 100644
index 0000000000000..0a90000e0eae0
--- /dev/null
+++ b/docs/4.1/guides/trusted-clusters.md
@@ -0,0 +1,56 @@
+### Untrusted Auth Servers
+
+Teleport nodes use the HTTPS protocol to offer their join tokens to the auth
+server (e.g. one running on `10.0.10.5`). In a zero-trust environment,
+you must assume that an attacker can hijack the IP address of the auth server,
+e.g. `10.0.10.5`.
+
+To prevent this from happening, you need to supply every new node with an
+additional bit of information about the auth server. This technique is called
+"CA Pinning". It works by asking the auth server to produce a "CA Pin", which is
+a hashed value of its private key, i.e. it cannot be forged by an attacker.
+
+On the auth server:
+
+``` bash
+$ tctl status
+Cluster  staging.example.com
+User CA  never updated
+Host CA  never updated
+CA pin   sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1
+```
+
+The "CA pin" at the bottom needs to be passed to the new nodes when they're
+starting for the first time, i.e. when they join a cluster.
+
+Via CLI:
+
+``` bash
+$ teleport start \
+   --roles=node \
+   --token=1ac590d36493acdaa2387bc1c492db1a \
+   --ca-pin=sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1 \
+   --auth-server=10.12.0.6:3025
+```
+
+or via `/etc/teleport.yaml` on a node:
+
+``` yaml
+teleport:
+  auth_token: "1ac590d36493acdaa2387bc1c492db1a"
+  ca_pin: "sha256:7e12c17c20d9cb504bbcb3f0236be3f446861f1396dcbb44425fe28ec1c108f1"
+  auth_servers:
+  - "10.12.0.6:3025"
+```
+
+!!! warning "Warning":
+    If a CA pin is not provided, the Teleport node will still join the
+    cluster, but it will print a `WARN` message to its standard error output.
+
+!!! warning "Warning":
+    The CA pin becomes invalid if a Teleport administrator performs a CA
+    rotation by executing [`tctl auth rotate`](../cli-docs/#tctl-auth-rotate).
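+
+Since this guide is meant to grow into "Add a Trusted Cluster", here is a rough sketch of the
+`trusted_cluster` resource that is typically created on the joining (leaf) cluster. Every value
+below is a placeholder, and the exact field names and workflow should be double-checked against
+the Admin Manual for your Teleport version:
+
+``` yaml
+# trusted_cluster.yaml -- illustrative sketch only
+kind: trusted_cluster
+version: v2
+metadata:
+  # should match the name of the cluster you are connecting to
+  name: main
+spec:
+  enabled: true
+  # the "trusted cluster" join token generated on the other cluster
+  token: secret-token-value
+  # reverse tunnel port of the other cluster's proxy (3024 by default)
+  tunnel_addr: main.example.com:3024
+  # web/HTTPS port of the other cluster's proxy (3080 by default)
+  web_proxy_addr: main.example.com:3080
+```
+
+Applying it would look something like `tctl create trusted_cluster.yaml`; a `role_map` section is
+usually added as well to map roles between the two clusters.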
\ No newline at end of file
From 609bb9c4e854c5a204a4ffefb546bd9aae3ab318 Mon Sep 17 00:00:00 2001
From: Heather Young
Date: Thu, 17 Oct 2019 17:44:06 +0300
Subject: [PATCH 21/23] add todos to production

---
 docs/4.1/production.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/docs/4.1/production.md b/docs/4.1/production.md
index b4ba0fe8dadbd..4965a0a0f3f69 100644
--- a/docs/4.1/production.md
+++ b/docs/4.1/production.md
@@ -1,8 +1,8 @@
 # Production Guide
 
-Minimal Config example
+TODO: This is a WIP document
 
-Include security considerations. Address vulns in quay?
+
 
 [TOC]
 
@@ -146,6 +146,10 @@ use them to add nodes.
 
 ## Security Considerations
 
+
+
+
+
 ### CA Pinning
 
 Teleport nodes use the HTTPS protocol to offer the join tokens to the auth
From 29041bd809a4bbbef777e33ac662dae04ca1829b Mon Sep 17 00:00:00 2001
From: Heather Young
Date: Thu, 17 Oct 2019 17:52:13 +0300
Subject: [PATCH 22/23] checkout auth guide from base branch

---
 docs/4.1/architecture/auth.md | 35 +++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/docs/4.1/architecture/auth.md b/docs/4.1/architecture/auth.md
index 797976dfbcd06..7cd18acadd86b 100644
--- a/docs/4.1/architecture/auth.md
+++ b/docs/4.1/architecture/auth.md
@@ -218,6 +218,41 @@ storage.
     `/var/lib/teleport/log` to allow them to combine all audit events into the
     same audit log. [Learn how to deploy Teleport in HA Mode.](../admin-guide#high-availability))
 
+## Recording Proxy Mode
+
+In this mode, the proxy terminates (decrypts) the SSH connection using the
+certificate supplied by the client via SSH agent forwarding and then establishes
+its own SSH connection to the final destination server, effectively becoming an
+authorized "man in the middle". This allows the proxy server to forward SSH
+session data to the auth server to be recorded, as shown below:
+
+![recording-proxy](../img/recording-proxy.svg?style=grv-image-center-lg)
+
+The recording proxy mode, although _less secure_, was added to allow Teleport
+users to enable session recording for OpenSSH servers running `sshd`, which is
+helpful when gradually transitioning large server fleets to Teleport.
+
+We consider the "recording proxy mode" to be less secure for two reasons:
+
+1. It grants additional privileges to the Teleport proxy. In the default mode,
+   the proxy stores no secrets and cannot "see" the decrypted data. This makes a
+   proxy less critical to the security of the overall cluster. But if an
+   attacker gains physical access to a proxy node running in the "recording"
+   mode, they will be able to see the decrypted traffic and client keys stored
+   in the proxy's process memory.
+2. Recording proxy mode requires SSH agent forwarding. Agent forwarding is
+   required because without it, a proxy will not be able to establish the second
+   connection to the destination node.
+
+However, there are advantages to proxy-based session recording too. When
+sessions are recorded at the nodes, a root user can add iptables rules to
+prevent session logs from reaching the Auth Server. With sessions recorded at
+the proxy, users with root privileges on nodes have no way of disabling the
+audit.
+
+See the [admin guide](../admin-guide#recorded-sessions) to learn how to turn on the
+recording proxy mode.
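+
+For reference, switching a cluster into this mode is normally a one-line change on the auth
+server. A minimal sketch, assuming the `session_recording` setting described in the Admin
+Manual (verify the accepted values for your Teleport version):
+
+``` yaml
+# /etc/teleport.yaml on the auth server
+auth_service:
+  enabled: yes
+  # "node" is the default; "proxy" turns on the recording proxy mode described above
+  session_recording: proxy
+```
+
+The auth service has to be restarted to pick up the change.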
+ ## Storage Back-Ends Different types of cluster data can be configured with different storage From bb24c6988f6704b589b2815cbabdf6d456585214 Mon Sep 17 00:00:00 2001 From: Heather Young Date: Thu, 17 Oct 2019 17:54:13 +0300 Subject: [PATCH 23/23] checkout proxy guide from base branch --- docs/4.1/architecture/proxy.md | 35 ---------------------------------- 1 file changed, 35 deletions(-) diff --git a/docs/4.1/architecture/proxy.md b/docs/4.1/architecture/proxy.md index b90772e7f966d..986a8adc9ee19 100644 --- a/docs/4.1/architecture/proxy.md +++ b/docs/4.1/architecture/proxy.md @@ -90,41 +90,6 @@ client `ssh` or using `tsh`: [SSH jump hosts](https://wiki.gentoo.org/wiki/SSH_jump_host) implemented using OpenSSH's `ProxyCommand`. also supports OpenSSH's ProxyJump/ssh -J implementation as of Teleport 4.1. -## Recording Proxy Mode - -In this mode, the proxy terminates (decrypts) the SSH connection using the -certificate supplied by the client via SSH agent forwarding and then establishes -its own SSH connection to the final destination server, effectively becoming an -authorized "man in the middle". This allows the proxy server to forward SSH -session data to the auth server to be recorded, as shown below: - -![recording-proxy](../img/recording-proxy.svg?style=grv-image-center-lg) - -The recording proxy mode, although _less secure_, was added to allow Teleport -users to enable session recording for OpenSSH's servers running `sshd`, which is -helpful when gradually transitioning large server fleets to Teleport. - -We consider the "recording proxy mode" to be less secure for two reasons: - -1. It grants additional privileges to the Teleport proxy. In the default mode, - the proxy stores no secrets and cannot "see" the decrypted data. This makes a - proxy less critical to the security of the overall cluster. But if an - attacker gains physical access to a proxy node running in the "recording" - mode, they will be able to see the decrypted traffic and client keys stored - in proxy's process memory. -2. Recording proxy mode requires the SSH agent forwarding. Agent forwarding is - required because without it, a proxy will not be able to establish the 2nd - connection to the destination node. - -However, there are advantages of proxy-based session recording too. When -sessions are recorded at the nodes, a root user can add iptables rules to -prevent sessions logs from reaching the Auth Server. With sessions recorded at -the proxy, users with root privileges on nodes have no way of disabling the -audit. - -See the [admin guide](../admin-guide#recorded-sessions) to learn how to turn on -the recording proxy mode. - ## More Concepts * [Architecture Overview](./architecture)