Grafana Tanka is the robust configuration utility for your Kubernetes cluster, powered by the unique Jsonnet language
Tanka supports CLI completion for `bash`, `zsh` and `fish`.
Because Tanka acts as its own completion handler, it needs to hook into your shell's configuration file (`.bashrc`, etc.).
When using a shell other than `bash`, Tanka relies on a Bash compatibility mode. Tanka enables this automatically when installing the completion, but make sure no other completion framework (e.g. OhMyZsh) interferes with it, or your completion might not work properly. This sometimes depends on the order in which completions are loaded, so try putting Tanka before or after the others.
Tanka's behavior can be customized per Environment using a file called `spec.json`.
It is possible to access this data from Jsonnet:
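A minimal sketch of how this typically looks, assuming the special `tk` import that Tanka provides for `spec.json`-based environments:

```jsonnet
// Hedged sketch: Tanka injects the "tk" import for spec.json-based
// environments; it exposes the environment's spec to Jsonnet.
local tk = import 'tk';

{
  // reuse the environment's default namespace in your own objects
  namespace: tk.env.spec.namespace,
}
```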
Tanka supports two different ways of computing differences between the local configuration and the live cluster state: either native `kubectl diff -f -` is used, which gives the best possible results but is only possible for clusters with server-side diff support (Kubernetes 1.13+). When this is not available, Tanka falls back to `subset` mode.
You can specify the diff-strategy to use on the command line as well:
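For example (a sketch, assuming the flag is named `--diff-strategy`):

```bash
# pick the strategy explicitly instead of relying on auto-detection
tk diff --diff-strategy=native environments/default
tk diff --diff-strategy=subset environments/default
```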
The native diff mode is recommended, because it uses `kubectl diff` underneath, which sends the objects to the Kubernetes API server and computes the differences there.
This has the huge benefit that all changes made by webhooks and other internal components of Kubernetes are taken into account as well.
However, this is a fairly new feature and only available on Kubernetes 1.13 or later. Only the API server (control plane nodes) needs to have that version; worker nodes do not matter.
There is a known issue with `kubectl diff` that affects ports configured to use both TCP and UDP.
There are two additional modes which extend `native`: `validate` and `server`.
While all `kubectl diff` commands are sent to the API server, these two methods take advantage of an additional server-side diff mode (which uses the `kubectl diff --server-side` flag, complementing the server-side apply mode).
Since a plain `server` diff often produces cruft and wouldn't be representative of a client-side apply, the `validate` method uses the server-side diff to check that all manifests are valid server-side, but still displays the native diff output to the user.
If native diffing is not supported by your cluster, Tanka provides subset diff as a fallback method.
Subset diff only compares fields present in the local configuration and ignores all other fields. When you remove a field locally, you will see no differences.
This is required because Kubernetes adds dynamic fields to the state at runtime, which cannot be known on the client side. To produce somewhat usable output, Tanka can effectively only compare what it already knows about.
If this is a problem for you, consider switching to native mode.
You can use external diff utilities by setting the environment variable `KUBECTL_EXTERNAL_DIFF`. If you want to use a GUI or interactive diff utility, you must also set `KUBECTL_INTERACTIVE_DIFF=1` to prevent Tanka from capturing stdout.
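A sketch of how this can look (the external tool shown is just an example):

```bash
# use an external tool to render the diff
export KUBECTL_EXTERNAL_DIFF="colordiff -u"
tk diff environments/default

# for GUI/interactive diff tools, additionally stop Tanka from capturing stdout
export KUBECTL_INTERACTIVE_DIFF=1
```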
Tanka uses the following directories and special files:
Tanka organizes configuration in environments. For the rationale behind this, see the section in the tutorial.
An environment consists of at least two files:
`spec.json` configures environment properties such as the cluster connection (`spec.apiServer`), the default namespace (`spec.namespace`), etc. For the full set of options, see the Golang source code.
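A hedged illustration of what such a `spec.json` typically contains (values are placeholders):

```json
{
  "apiVersion": "tanka.dev/v1alpha1",
  "kind": "Environment",
  "metadata": {
    "name": "environments/default"
  },
  "spec": {
    "apiServer": "https://127.0.0.1:6443",
    "namespace": "monitoring"
  }
}
```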
Like other programming languages, Jsonnet needs an entrypoint into the evaluation, something to begin with. `main.jsonnet` is exactly this: the very first file being evaluated, importing or directly specifying everything required for this specific environment.
When talking about directories, Tanka uses the following terms:
| Term | Description | Identifier file |
|---|---|---|
| rootDir | The root of your project | `jsonnetfile.json` or `tkrc.yaml` |
| baseDir | The directory of the current environment | `main.jsonnet` |
Regardless of what subdirectory of the project you are in, Tanka will always be able to identify both directories by searching for the identifier files in the parent directories. Tanka needs these to correctly set up the import paths.
This is similar to how `git` works, by looking for the `.git` directory.
Tanka relies heavily on code reuse, so libraries are a natural thing. Roughly speaking, they can be imported from two paths:
- `/lib`: project-local libraries
- `/vendor`: external libraries
For more details, consider the import paths.
`jb` records all external packages installed in a file called `jsonnetfile.json`. This file is the source of truth about what should be included in `vendor/`. However, it should only include what is directly required; all recursive dependencies will be handled just fine.
`jsonnetfile.lock.json` is generated on every run of jsonnet-bundler and includes a list of packages that must be included in `vendor/`, along with the exact version and a `sha256` hash of the package contents.
Both files should be checked into source control: the `jsonnetfile.json` specifies what you need, and the `jsonnetfile.lock.json` is important to make sure that subsequent `jb install` invocations always do the exact same thing.
Tanka's behavior can also be adjusted using environment variables (the names below are those documented by Tanka):

- `TANKA_JB_PATH`: Path to the `jb` tool executable. Default: `jb` from `$PATH`.
- `TANKA_KUBECTL_PATH`: Path to the `kubectl` tool executable. Default: `kubectl` from `$PATH`.
- `TANKA_KUBECTL_TRACE`: Print all calls to `kubectl`. Default: `false`.
- `TANKA_HELM_PATH`: Path to the `helm` executable. Default: `helm` from `$PATH`.
- `TANKA_KUSTOMIZE_PATH`: Path to the `kustomize` executable. Default: `kustomize` from `$PATH`.
- `TANKA_PAGER`: Pager to use when displaying output. Set to an empty string to disable paging. Default: `$PAGER`.
- `PAGER`: Pager to use when displaying output; only used if `TANKA_PAGER` is not set. Set to an empty string to disable paging. Default: `less --RAW-CONTROL-CHARS --quit-if-one-screen --no-init`.
Tanka provides you with a day-to-day workflow for working with Kubernetes clusters:
- `tk show` to quickly check that the YAML representation looks good
- `tk diff` to ensure your changes will behave like they should
- `tk apply` to make it happen
However, sometimes it is necessary to integrate with other tooling that only supports `.yaml` files. For that case, `tk export` can be used:
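A minimal sketch (assuming the current `tk export <outputDir> <path>` argument order):

```bash
# render environments/default and write one .yaml file per resource
tk export exportDir/ environments/default
```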
This will create a separate `.yaml` file for each Kubernetes resource included in your Jsonnet. By default, Tanka derives each file name from the resource's fields using a built-in pattern.
If that does not fit your needs, you can provide your own pattern using the `--format` flag. The syntax is Go text/template; see https://pkg.go.dev/text/template for reference.
For example, the following would include the label named `app`, plus the `name` and `kind` of the resource:
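A hedged example of such a format string (field names follow the Kubernetes manifest structure):

```bash
tk export exportDir/ environments/default \
  --format '{{ .metadata.labels.app }}/{{ .metadata.name }}-{{ .kind }}'
```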
You can optionally use the template function `lower` for lower-casing fields; applied to the example above, this yields all-lowercase path segments. You can also use a different file extension by providing, for example, `--extension='yml'`.
Tanka can also export multiple inline environments, as showcased in Use case: consistent inline environments. This follows the same principles as described before, with the addition that you can also refer to Environment-specific data through the `env` keyword. For example, an export might refer to data from the Environment spec, as shown below.
- -When exporting a large amount of environments, jsonnet evaluation can become a bottleneck. To speed up the process, Tanka provides a few optional features.
-Given multiple environments, one may want to only export the environments that were modified since the last export. This is enabled by passing both the --merge-strategy=replace-envs
flags.
When these flags are passed, Tanka will:
-manifest.json
file that is generated by Tanka when exporting. The related entries are also removed from the manifest.json
file.manifest.json
file and re-export it.Tanka provides the tk tool importers
command to figure out which main.jsonnet
need to be re-exported based on what files were modified in a workspace.
If, for example, the lib/my-lib/main.libsonnet
file was modified, you could run the command like this to find which files to export:
Note that deleted environments need special consideration when doing this.
-The tk tool importers
utility only works with existing files so deleting an environment will result in stale manifest.json
entries and moving an environment will result in manifest conflicts.
-In order to correctly handle deleted environments, they need to be passed to the export command:
Read this blog post for more information about memory ballasts.
For large environments that load lots of data into memory on evaluation, a memory ballast can dramatically improve performance. This feature is exposed through the `--mem-ballast-size-bytes` flag on the export command.
Anecdotally (Grafana Labs), environments that took around a minute to load were able to load in around 45 seconds with a ballast of 5GB (`--mem-ballast-size-bytes=5368709120`). Decreasing the ballast size had a negative impact on performance, and increasing it further did not result in any noticeable improvement.
Tanka can also cache the results of the export. This is useful if you often export the same files and want to avoid recomputing them. The cache key is calculated from the main file and all of its transitive imports, so any change to any file possibly used in an environment will invalidate the cache.
This is configured by two flags:
- `--cache-path`: the local filesystem path where the cache will be stored. The cache is a flat directory of JSON files (one per environment).
- `--cache-envs`: if exporting multiple environments, this flag can be used to specify, with regexes, which environments to cache. If not specified, all environments are cached.
Note: the cache can also live on remote storage by pointing `--cache-path` at a FUSE mount, such as `s3fs`.
Jsonnet is a data templating language, originally created by Google.
It is a superset of JSON, adding common structures from full programming languages to data modeling. Because it is a superset of JSON and ultimately always compiles to JSON, the output is guaranteed to be valid JSON (or YAML).
By allowing functions and imports, rich abstraction is possible, even across project boundaries.
For more, refer to the official documentation: https://jsonnet.org/
Tanka aims to be a fully compatible, drop-in replacement for the main workflow of `ksonnet` (`show`, `diff`, `apply`).
In general, both tools are very similar when it comes to how they handle Jsonnet and apply it to a Kubernetes cluster.
However, `ksonnet` included a rich code generator for establishing a CLI-based workflow for editing Kubernetes objects. It also used to manage dependencies itself and had a lot of concepts for different levels of abstraction. When designing Tanka, we felt these add more complexity for the user than the value they provide. To keep Tanka as minimal as possible, these are not available and are not likely to ever be added.
Tanka development started at a time when kubecfg was part of the already-deprecated `ksonnet` project. Although these projects are similar, Tanka aims to provide continuity for `ksonnet` users, whereas `kubecfg` is (according to the project's README.md) really just a thin Kubernetes-specific wrapper around Jsonnet evaluation.
Helm relies heavily on string templating of `.yaml` files. We feel this is the wrong way to approach the absence of abstractions inside of YAML, because the templating part of the application has no idea of the structure and syntax of YAML.
This makes debugging very hard. Furthermore, `helm` is not able to provide an adequate solution for edge cases: if you want to set some parameters that are not already implemented by the Chart, you have no choice but to modify the Chart first.
Jsonnet, on the other hand, has you covered by supporting mixing (patching, deep-merging) objects on top of the library's output if required.
Tanka supports formatting for all `.jsonnet` and `.libsonnet` files using the `tk fmt` command.
By default, the command excludes all `vendor` directories.
Tanka can automatically delete resources from your cluster once you remove them from Jsonnet.
To accomplish this, it appends the `tanka.dev/environment: <hash>` label to each created resource. This label is used to identify resources that are missing from the local state in the future.
Because the label causes a diff for every single object in your cluster and not everybody wants this, it needs to be explicitly enabled. To do so, add the following field to your `spec.json`:
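A hedged sketch of the relevant part of `spec.json` (the documented field is `spec.injectLabels`):

```json
{
  "spec": {
    "injectLabels": true
  }
}
```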
Once added, run a `tk apply`, make sure the label is actually added, and confirm by typing `yes`.
From now on, you can use `tk prune` to remove old resources from your cluster.
The Helm project is the biggest ecosystem of high quality, -well maintained application definitions for Kubernetes.
-Even though Grafana Tanka uses the Jsonnet language for -resource definition, you can still consume Helm resources, as described below.
-Helm support is provided using the
-github.com/grafana/jsonnet-libs/tanka-util
-library. Install it with:
The following example shows how to extract the individual resources of the
-grafana
Helm Chart:
The Chart itself is required to be vendored at a relative
-path, in this case ./charts/grafana
.
Once invoked, the `$.grafana` key holds the individual resources of the Helm Chart as a regular Jsonnet object, which can be manipulated in the same way as any other Jsonnet data.
-helm template
CLI command.
-The following options control how the command is invoked:
Tanka will install Custom Resource Definitions (CRDs) automatically, if the
-Helm Chart requires them and ships them in crds/
. This is equivalent to helm template --include-crds
. This can be disabled using includeCrds: false
:
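For example (a sketch reusing the `helm.template()` call from above):

```jsonnet
helm.template('grafana', './charts/grafana', {
  includeCrds: false,  // skip rendering the chart's crds/ directory
})
```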
Tanka, like Jsonnet, is hermetic: it always yields the same resources when the project is strictly self-contained.
Helm, however, keeps Charts and repository configuration somewhere around `~/.config/helm`, which violates this requirement.
-bounds of a project. This means, you MUST put your Charts somewhere next to
-the file that calls helm.template()
, so that it can be referred to using a
-relative path.
Where to actually put them inside the project is up to you, but keep in mind you -need to refer to them using relative paths.
-We recommend always writing libraries that wrap the actual Helm Chart, so the
-consumer does not need to be aware of it. Whether you put these into your local lib/
directory or
-publish and vendor them into the vendor/
directory is up to you.
A library usually looks like this:
-When adopting Helm inside it, we recommend vendoring at the top level, as such:
-This way, you can refer to the charts as ./charts/<someChart>
from inside
-main.libsonnet
. By keeping the chart as close to the consumer as possible, the
-library is kept portable.
Helm does not make vendoring incredibly easy by itself. `helm pull` provides the required plumbing, but it does not record its actions in a reproducible manner. Therefore, Tanka ships a special utility, `tk tool charts`, which automates `helm pull`.
To add a chart, use the following command; this will also call `tk tool charts vendor`, so that the `charts/` directory is updated:
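A hedged example (chart name and version are placeholders; the `<repo>/<name>@<version>` form follows the chart tool's documentation):

```bash
tk tool charts add grafana/grafana@6.50.0
```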
By default, the `stable` repository is automatically set up for you. If you wish to add another repository, you can use the `add-repo` command:
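A sketch of the `add-repo` invocation (repository name and URL are examples):

```bash
tk tool charts add-repo grafana https://grafana.github.io/helm-charts
```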
Another way is to modify `chartfile.yaml` directly.
If you wish to install multiple versions of the same chart, you can write them to a specific directory. You can do so with a `:<directory>` suffix in the `add` command, or by modifying the chartfile manually; the resulting chartfile reflects the chosen directories.
To install charts from an existing chartfile, run the vendor command shown below. Optionally, you can also pass the `--prune` flag to remove vendored charts that are no longer in the chartfile.
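A sketch of the vendoring command:

```bash
# download everything listed in chartfile.yaml into charts/,
# removing vendored charts that are no longer listed
tk tool charts vendor --prune
```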
Tanka supports pulling charts from OCI registries. To use one, the chart name must be split into two parts: the registry and the chart name. As an example, if you wanted to pull the `oci://public.ecr.aws/karpenter/karpenter:v0.27.3` image, your chartfile needs to reference the registry and the chart separately.
Registry login is not supported yet.
Helm support in Tanka requires the `helm` binary to be installed on your system and available on the `$PATH`. If Helm is not installed, you will see an error message telling you so.
To solve this, you need to install Helm. If you cannot install it system-wide, you can point Tanka at your executable using `TANKA_HELM_PATH`.
This error occurs when Tanka was not told where the `helm.template()` call was invoked from. This most likely means you didn't call `new(std.thisFile)` when importing `tanka-util`:
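A hedged sketch of the correct import (this mirrors the Helm example earlier on this page):

```jsonnet
local tanka = import 'github.com/grafana/jsonnet-libs/tanka-util/main.libsonnet';
// std.thisFile is required so relative chart paths can be resolved
local helm = tanka.helm.new(std.thisFile);
```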
Tanka failed to locate your Helm chart on the filesystem. It looked at the relative path you provided in `helm.template()`, starting from the directory of the file you called `helm.template()` from.
Please check that there is actually a valid Helm chart at this place. Referring to charts as `<repo>/<name>` is disallowed by design.
To make customization easier, `helm.template()` returns the resources not as the list it receives from Helm, but converts it into an object.
For the indexing key it uses `kind_name` by default. In some rare cases this might not be enough to distinguish between two resources, namely when the same resource exists in two namespaces.
To handle this, pass a custom name format, e.g. to also include the namespace:
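A hedged sketch using the `nameFormat` option (the template shown is illustrative):

```jsonnet
helm.template('grafana', './charts/grafana', {
  // include the namespace in the indexing key to avoid collisions
  nameFormat: '{{ print .metadata.namespace "_" .kind "_" .metadata.name | snakecase }}',
})
```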
The literal default format used is `{{ print .kind "_" .metadata.name | snakecase }}`.
Grafana Tanka is the robust configuration utility for your Kubernetes cluster, powered by the unique Jsonnet language
- Clean: The Jsonnet language expresses your Kubernetes apps more clearly than YAML ever did.
- Reusable: Build application libraries, import them anywhere and even share them on GitHub!
- Concise: Using the Kubernetes library, you will never see boilerplate again!
- Reliable: Stop guessing and use the powerful diff to know the exact changes in advance.
- Production ready: Tanka deploys Grafana Cloud and many more production setups.
- Open Source: Just like the popular Grafana and Loki projects, Tanka is fully open source.
Inline environments is the practice of defining the environment's config inline, for evaluation at runtime, as opposed to configuring it statically in `spec.json`.
The general takeaway is:
- `spec.json` will no longer be used
- `main.jsonnet` is expected to render a `tanka.dev/Environment` object
- the traditional Jsonnet output is nested in the `data` element of that object
Converting a traditional `spec.json` environment into an inline environment is quite straightforward. Based on the example from Using Jsonnet, converting is as simple as bringing the contents of `spec.json` into `main.jsonnet` and moving the original `main.jsonnet` scope into the `data:` element.
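A minimal sketch of the resulting inline environment (values are placeholders):

```jsonnet
{
  apiVersion: 'tanka.dev/v1alpha1',
  kind: 'Environment',
  metadata: { name: 'environments/default' },
  spec: {
    apiServer: 'https://127.0.0.1:6443',
    namespace: 'monitoring',
  },
  data: {
    // the former contents of main.jsonnet go here
  },
}
```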
Even though the `apiServer` directive is originally meant to prevent manifests from accidentally being applied to the wrong Kubernetes cluster, there is a valid use case for making the `apiServer` variable: local test clusters.
Instead of modifying `spec.json` each time, with inline environments it is possible to leverage powerful Jsonnet concepts, for example top-level arguments:
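A sketch of such a top-level-argument environment:

```jsonnet
// the API server address is injected at evaluation time
function(apiServer) {
  apiVersion: 'tanka.dev/v1alpha1',
  kind: 'Environment',
  metadata: { name: 'environments/local' },
  spec: { apiServer: apiServer, namespace: 'default' },
  data: { /* resources */ },
}
```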
Applying this to a local Kubernetes cluster can be done like this:
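For example (assuming the `--tla-str` flag and a local cluster address):

```bash
tk apply environments/local --tla-str apiServer=https://127.0.0.1:6443
```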
Similarly, this can be used to configure any part of the Environment object, like `namespace:`, `metadata.labels`, and so on.
It is possible to define multiple inline environments in a single Jsonnet file. This enables an operator to generate consistent Tanka environments for multiple Kubernetes clusters. We can define a Tanka environment once and then repeat it for a set of clusters, as shown in this example:
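A hedged sketch of such a setup (cluster names and addresses are placeholders):

```jsonnet
local clusters = [
  { name: 'dev', apiServer: 'https://dev.example.com:6443' },
  { name: 'prod', apiServer: 'https://prod.example.com:6443' },
];

{
  // one tanka.dev/Environment per cluster, sharing the same resources
  [c.name]: {
    apiVersion: 'tanka.dev/v1alpha1',
    kind: 'Environment',
    metadata: { name: 'environments/myapp/' + c.name },
    spec: { apiServer: c.apiServer, namespace: 'myapp' },
    data: { /* the shared resources */ },
  }
  for c in clusters
}
```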
In the workflow you now have to use `--name` to select the environment you want to deploy. For export, it is possible to use the same `--name` selector, or you can do a recursive export while using the `--format` option.
`import "tk"`: Inline environments cannot use `import "tk"` anymore, as this information was populated before Jsonnet evaluation based on the existence of `spec.json`.
`tk env`: The different `tk env` subcommands are heavily based on the `spec.json` approach. `tk env list` will continue to work as expected; `tk env (add|remove|set)` will only work for `spec.json`-based environments.
Tanka is distributed as a single binary called `tk`. It already includes the Jsonnet compiler, but requires some tools to be available:
- `kubectl`: Tanka uses `kubectl` to communicate with your cluster. This means `kubectl` must be available somewhere on your `$PATH`. If you have ever worked with Kubernetes before, this should already be the case.
- `diff`: To compute differences, standard UNIX `diff(1)` is required.
- `jb`: Jsonnet Bundler, the Jsonnet package manager.
- `helm`: Helm, required for Helm support.
On macOS, Tanka is best installed using `brew`:
This downloads the most recent version of Tanka and installs it. Tanka is then automatically kept up to date as part of `brew upgrade`.
We maintain two AUR packages, one building from source and another using a pre-compiled binary. These can be installed using any AUR helper, e.g. `yay`.
For all other operating systems, we provide pre-compiled binaries for Tanka at GitHub Releases. Just grab the latest version from there, download it and put it somewhere in your `$PATH` (e.g. `/usr/local/bin/tk`).
For Linux and macOS, download the binary for your architecture, put it somewhere on your `$PATH`, and make it executable:
If you happen to have a local Go toolchain, you can also build Tanka from source using `go install`:
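A sketch of that command (module path per the Tanka repository layout):

```bash
go install github.com/grafana/tanka/cmd/tk@latest
```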
If that does not work for whatever reason (Go modules, etc), clone and compile manually:
The Jsonnet Bundler project provides a package manager for Jsonnet, to share and reuse code across the internet, similar to `npm` or `go mod`. Tanka uses this tool by default, so it's recommended to install it as well.
On macOS, Jsonnet Bundler is best installed using `brew`:
This downloads the most recent version of Jsonnet Bundler and installs it. It is then automatically kept up to date as part of `brew upgrade`.
On ArchLinux, install the `jsonnet-bundler-bin` AUR package.
The `jb` binary is primarily distributed using GitHub releases. For Linux and macOS, download the binary for your architecture, put it somewhere on your `$PATH`, and make it executable:
If you happen to have a local Go toolchain available, you can build from source using `go install`:
Releasing a new version of Tanka requires a couple of manual and automated steps. This guide gives you a runbook for doing them in the right order.
- Pull the latest state of the `main` branch to your local clone.
- Create and push a new tag prefixed with `v` (e.g. `v0.28.0`).
This starts multiple GitHub workflows that will produce binaries, a Docker image, and a new GitHub release that is marked as draft.
Once all these actions have finished, go to https://github.com/grafana/tanka/releases and you should see the new draft release. Click the pencil icon ("edit") at the top right and go to the last line of the text body. Now hit the "Generate release notes" button to add a changelog to the end of the release notes.
Once you've checked that the release looks fine (e.g. no broken links, no missing version numbers in the download paths), click the "Publish release" button.
-There are three ways of doing so:
- -Also check out the official Jsonnet docs on this -topic.
-Jsonnet is a superset of JSON, it treats any JSON as valid Jsonnet. Because many -systems can be told to output their data in JSON format, this provides a pretty -good interface between those.
-For example, your build tooling like make
could acquire secrets from systems such as
-Vault, etc. and write that into secrets.json
.
Another way of passing values from the outside are external variables, which are specified like so:
- -They can be accessed using std.extVar
and the name given to them on the command line:
Usually with Tanka, your main.jsonnet
holds an object at the top level (most
-outer type in the generated JSON):
Another type of Jsonnet that naturally accepts parameters is the function
.
-When the Jsonnet compiler finds a function at the top level, it invokes it and
-allows passing parameter values from the command line:
Here, who
needs a value while msg
has a default. This can be invoked like so:
The most important file is called `main.jsonnet`, because this is where Tanka invokes the Jsonnet compiler. Every single line of Jsonnet, including imports, functions and whatnot, is then evaluated until a single, very big JSON object is left. This object is returned to Tanka and includes all of your Kubernetes manifests somewhere in it, most probably deeply nested.
But as `kubectl` expects a YAML stream and not a nested tree, Tanka needs to extract your objects first. To do this, it traverses the tree until it finds something that looks like a Kubernetes manifest. An object is considered valid when it has both `kind` and `apiVersion` set.
To ensure Tanka can find your manifests, the output of your Jsonnet needs to have one of the following structures:
Most commonly used is a single big object that includes all manifests as leaf nodes. How deeply encapsulated the actual object is does not matter; Tanka will traverse down until it finds something that is valid.
Using this technique has the big benefit that it is self-documenting, as the nesting of keys can be used to logically group related manifests, for example by application.
An encapsulation level of zero is also possible, which means nothing other than a regular object like it could be obtained from `kubectl show -o json`:
Using an array of objects is also fine:
Users of `kubectl` might have had contact with a type called `List`. It is not part of the official Kubernetes API but rather a pseudo-type introduced by `kubectl` for dealing with multiple objects at once. Thus, Tanka does not support it out of the box.
To take full advantage of Tanka's features, you can manually flatten it:
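A hedged sketch of such a flattening (the imported file name is hypothetical):

```jsonnet
local list = import 'list.json';  // an object of kind List

{
  // turn the items array back into a keyed object Tanka can traverse
  [std.asciiLower(item.kind) + '_' + item.metadata.name]: item
  for item in list.items
}
```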
Tanka extends Jsonnet using native functions, offering additional functionality not yet available in the standard library. To use them in your code, you need to access them using `std.native` from the standard library. `std.native` takes the native function's name as a `string` argument and returns a `function`, which is called using the second set of parentheses.
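For example (using the `sha256` native function described below):

```jsonnet
{
  // the first call resolves the function, the second call invokes it
  sum: std.native('sha256')('Hello, world!'),
}
```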
- `sha256` computes the SHA256 sum of the given string.
- `parseJson` parses a JSON string and returns the respective Jsonnet type (`Object`, `Array`, etc.).
- `parseYaml` wraps `yaml.Unmarshal` to convert a string of YAML document(s) into a set of objects. If the YAML only contains a single document, a single-element array is returned.
- `manifestJsonFromJson` reserializes JSON and allows changing the indentation.
- `manifestYamlFromJson` serializes a JSON string as a YAML document.
- `escapeStringRegex` escapes all regular expression metacharacters and returns a regular expression that matches the literal text.
- `regexMatch` returns whether the given string is matched by the given RE2 regular expression.
- `regexSubst` replaces all matches of the RE2 regular expression with the replacement string.
Evaluating with Tanka results in the JSON:
-Jsonnet is the data templating language Tanka uses for -expressing what shall be deployed to your Kubernetes cluster. Understanding -Jsonnet is crucial to using Tanka effectively.
-This page covers the Jsonnet language itself. For more information on how to -use Jsonnet with Kubernetes, see the tutorial. There’s -also the official Jsonnet tutorial -that provides a more detailed review of language features.
-Being a superset of JSON, the syntax is very similar:
- -Jsonnet has rich abstraction features, which makes it interesting for -configuring Kubernetes, as it allows to keep configurations concise, yet -readable.
- -Just as other languages, Jsonnet allows code to be imported from other files:
- -The exported object (the only non-local one) of secret.libsonnet
is now
-available as a local
variable called secret
.
When using Tanka, it is also possible to directly import .json
files, as if
-they were a .libsonnet
.
Make sure to also take a look at the libraries documentation to learn how to use import
and re-use code.
-The documentation on Tanka import paths and vendoring are useful to understand how imports work in Tanka’s context.
Deep merging allows you to change parts of an object without touching all of it. -Consider the following example:
- -To change the namespace only, we can use the special merge key +:
like so:
The difference between :
and +:
is that the former replaces the original
-data at that key, while the latter applies the new object as a patch on top,
-meaning that values will be updated if possible but all other stay like they
-are.
-To merge those two, just add (+
) the patch to the original:
The output of this is the following JSON object:
- -Jsonnet supports functions, similar to how Python does. They can be defined in -two different ways:
- -Objects can have methods:
- -Default values, keyword-args and more examples can be found at -jsonnet.org.
-The Jsonnet standard library includes many helper methods ranging from object -and array mutation, over string utils to computation helpers.
-Documentation is available at -jsonnet.org.
-Jsonnet supports a conditionals in a fashion similar to a ternary operator:
- -More on jsonnet.org.
-Jsonnet has multiple options to refer to parts of an object:
- -For more information take a look at -jsonnet.org
Below is a list of common errors and how to address them.
`Evaluating jsonnet: RUNTIME ERROR: Undefined external variable: __ksonnet/components`
When migrating from `ksonnet`, this error might occur because Tanka does not provide the global `__ksonnet` variable, nor does it strictly have the concept of components. You will need to use the plain Jsonnet `import` feature instead. Note that this requires your code to be inside one of the import paths.
`Evaluating jsonnet: RUNTIME ERROR: couldn't open import "k.libsonnet": no match locally or in the Jsonnet library paths`
This error can occur when the `k8s-libsonnet` Kubernetes libraries are missing from the import paths. While `k8s-libsonnet` used to magically include them, Tanka follows a more explicit approach and requires you to install them using `jb`:
-It installs the k8s-libsonnet
library (in vendor/github.com/jsonnet-libs/k8s-libsonnet/1.21/
).
-You can replace the 1.21
matching the Kubernetes version you want to run against.
It makes an alias for libraries importing k.libsonnet
directly. See
-Aliasing for the alias rationale.
A long-standing bug in kubectl
-results in an incorrect diff output if the same port number is used multiple
-times in differently named ports, which commonly happens if a port is specified
-using both protocols, tcp
and udp
. Nevertheless, tk apply
will still work
-correctly.
Kustomize provides a solution for customizing Kubernetes manifests in YAML.
Even though Grafana Tanka uses the Jsonnet language for resource definition, you can still consume kustomizations, as described below.
Kustomize support is provided using the github.com/grafana/jsonnet-libs/tanka-util library. Install it with `jb install github.com/grafana/jsonnet-libs/tanka-util`.
-flux2/source-controller
-kustomization:
Kustomize takes a kustomization manifest as input. Go on an create this file
-flux2/kustomization.yaml
relative to above jsonnet:
Once invoked, the $.source_controller
key holds the individual resources of
-the kustomization as a regular Jsonnet object that looks roughly like so:
Above can be manipulated in the same way as -any other Jsonnet data.
-Tanka, like Jsonnet, is hermetic. It always yields the same resources when -the project is strictly self-contained.
-Kustomize however has the ability to pull -resources -from different sources at runtime, which violates above requirement. This is -also apparent in the example above.
- -Kustomize support in Tanka requires the kustomize
binary installed on your
-system and available on the $PATH
. If Kustomize is not installed, you will see
-this error message:
To solve this, you need to
-install Kustomize.
-If you cannot install it system-wide, you can point Tanka at your executable
-using TANKA_KUSTOMIZE_PATH
This occurs, when Tanka was not told where it kustomize.build()
was invoked
-from. This most likely means you didn’t call new(std.thisFile)
when importing tanka-util
:
Tanka failed to locate your kustomization on the filesystem. It looked at the
-relative path you provided in kustomize.build()
, starting from the directory
-of the file you called kustomize.build()
from.
Please check there is actually a valid kustomization at this place.
When using `import` or `importstr`, Tanka considers the following directories to find a suitable file for that specific import:

| Rank | Path | Purpose |
|---|---|---|
| 4 | `<baseDir>` | The directory of your environment, e.g. `/environments/default`. Put things that belong to this very environment here. |
| 3 | `/lib` | Project-global libraries, used in multiple environments but specific to this project. |
| 2 | `<baseDir>/vendor` | Per-environment vendor, can be used for vendor overriding. |
| 1 | `/vendor` | Global vendor, holds external libraries installed using `jb`. |
The tool for dealing with libraries is jsonnet-bundler. It can install packages from any git source using SSH, and from GitHub over HTTPS.
To install a library from GitHub, use one of the following:
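For example (the library shown is just a placeholder):

```bash
jb install github.com/grafana/jsonnet-libs/grafana-builder
```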
Otherwise, use the SSH syntax.
Publishing is as easy as committing and pushing to a git remote. GitHub is recommended, as it is most common and supports faster installation using HTTP archives.
The `vendor` directory is immutable in nature. You can't and should never modify any files inside of it; `jb` will revert those changes on the next run anyway.
Nevertheless, it can sometimes become necessary to make changes there, e.g. if an upstream library contains a bug that needs to be fixed immediately, without waiting for the upstream maintainer to review it.
Because import paths are ranked in Tanka, you can use a technique called shadowing: by putting a file with the exact same name in a higher-ranked path, Tanka will prefer that file instead of the original in `vendor`, which has the lowest possible rank of 1.
For example, if `/vendor/foo/bar.libsonnet` contained an error, you could create `/lib/foo/bar.libsonnet` and fix it there.
Another common case is overriding the entire `vendor` bundle per environment. This is handy when you, for example, want to test a change to an upstream library which is used in many environments (including `prod`) in a single one, without affecting all the others.
For this, Tanka lets you have a separate `vendor`, `jsonnetfile.json` and `jsonnetfile.lock.json` per environment. To do so:
tkrc.yaml
Tanka normally uses the jsonnetfile.json
from your project to find its root.
-As we are going to create another one of that down the tree in the next step, we
-need another marker for <rootDir>
.
For that, create an empty file called tkrc.yaml
in your project’s root,
-alongside the original jsonnetfile.json
.
vendor
to your environmentIn your environments folder (e.g. /environments/default
):
When using Tanka, namespaces are handled slightly differently compared to `kubectl`, because environments offer more granular control than the contexts used by `kubectl`.
In the `spec.json` of each environment, you can set the `spec.namespace` field, which is the default namespace. The default namespace is set on every resource that does not have a namespace set in Jsonnet.
| Scenario | Action |
|---|---|
| 1. Your resource lacks namespace information (`metadata.namespace` unset or `""`) | Tanka sets `metadata.namespace` to the value of `spec.namespace` in `spec.json` |
| 2. Your resource already has namespace information | Tanka does nothing, accepting the explicit namespace |
While we recommend keeping environments limited to a single namespace, there are legitimate cases where it's handy to have them span multiple namespaces.
Some resources in Kubernetes are cluster-wide, meaning they don't belong to a single namespace at all. Tanka makes an attempt not to add namespaces to known cluster-wide types. It does this with a short list of types in the source code.
Tanka cannot feasibly maintain this list for all known custom resource types. In those cases, resources will have namespaces added to their manifests, and kubectl should happily apply them as non-namespaced resources.
If this presents a problem for your workflow, you can override this behavior per resource by setting the `tanka.dev/namespaced` annotation to `"false"` (must be of `string` type):
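A sketch of such an override (the resource shown is hypothetical):

```jsonnet
myCustomResource+: {
  metadata+: {
    annotations+: {
      // must be the string "false", not a boolean
      'tanka.dev/namespaced': 'false',
    },
  },
}
```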
When a project becomes bigger over time and includes a lot of Kubernetes objects, it may become necessary to operate on only a subset of them (e.g. apply only a part of an application).
Tanka helps you with this by allowing you to limit the objects used on the command line with the `--target` flag. Say you are deploying an nginx instance with a special `nginx.conf` and want to apply the `ConfigMap` first:
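A hedged example (the object name is a placeholder):

```bash
# apply only the ConfigMap first ...
tk apply environments/default --target configmap/nginx-config
# ... then everything else
tk apply environments/default
```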
The syntax of the `--target` / `-t` flag is `--target=<kind>/<name>`. If multiple objects match this pattern, all of them are used. The `--target` / `-t` flag can be specified multiple times to work with multiple objects.
The argument passed to the `--target` flag is interpreted as an RE2 regular expression. This allows you to use all sorts of wildcards and other advanced matching functionality to select Kubernetes objects:
When using regular expressions, there are some things to watch out for:
Tanka automatically surrounds your regular expression with line anchors. For example, `--target 'deployment/.*'` becomes `^deployment/.*$`.
Regular expressions may consist of characters that have special meanings in shell. Always make sure to properly quote your regular expression using single quotes.
Sometimes it may be desirable to exclude a single object instead of including all others. To do so, prepend the regular expression with an exclamation mark (`!`), like so:
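For example:

```bash
# apply everything except the nginx Deployment
tk apply environments/default --target '!deployment/nginx'
```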
Tanka supports server-side apply, which requires at least Kubernetes 1.16+ and was promoted to stable status in 1.22.
To enable server-side apply in Tanka, add the following field to `spec.json`:
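A hedged sketch of that field (the documented name is `spec.applyStrategy`):

```json
{
  "spec": {
    "applyStrategy": "server"
  }
}
```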
This also has the effect of changing the default diff strategy to `server`, but this can be overridden via command line flags or `spec.json`.
While server-side apply doesn't have any effect on the resources being applied and is intended to be a general in-place upgrade to client-side apply, there are differences in how fields are managed that can make converting existing cluster resources a non-trivial change. Identifying and fixing these changes is beyond the scope of this guide, but many can be found before an apply by using the `validate` or `server` diff strategy.
As part of the changes, you may encounter error messages which recommend the use of the `--force-conflicts` flag. Using `tk apply --force` in server-side mode will enable that flag for kubectl instead of `kubectl --force`, which no longer has any effect in server-side mode.
While we won't need to touch the resource definitions directly that frequently anymore now that our deployment definitions are parametrized, the `main.jsonnet` file is still very long and hard to read. Especially because of all the brackets, it's even worse than YAML at the moment.
Let's start cleaning this up by separating logical pieces into distinct files:
-main.jsonnet
: Still our main file, importing the other filesgrafana.libsonnet
: Deployment
and Service
for the Grafana instanceprometheus.libsonnet
: Deployment
and Service
for the Prometheus serverThe file should contain an object with the same function that was defined under the grafana
in /environments/default/main.jsonnet
, but called new
instead of grafana
.
-Do the same for /environments/default/prometheus.libsonnet
as well.
While main.jsonnet
is now short and very readable, the other two files are not
-really an improvement over regular yaml, mostly because they are still full of
-boilerplate.
Let’s use functions to create some useful helpers to reduce the amount of
-repetition. For that, we create a new file called kubernetes.libsonnet
, which
-will hold our Kubernetes utilities.
Creating a Deployment
requires some mandatory information and a lot of
-boilerplate. A function that creates one could look like this:
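A minimal sketch of such a helper, under the assumption that it lives in `kubernetes.libsonnet`:

```jsonnet
{
  // build a Deployment with sane defaults from a few parameters
  deployment(name, image, replicas=1): {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: { name: name },
    spec: {
      replicas: replicas,
      selector: { matchLabels: { name: name } },
      template: {
        metadata: { labels: { name: name } },
        spec: { containers: [{ name: name, image: image }] },
      },
    },
  },
}
```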
Invoking this function will substitute all the variables with the respective -passed function parameters and return the assembled object.
-Let’s simplify our grafana.libsonnet
a bit:
This drastically simplified the creation of the Deployment
, because we do not
-need to remember how exactly a Deployment
is structured anymore. Just use
-our helper and you are good to go.
At this point, our configuration is already flexible and concise, but not -really reusable. Let’s take a look at Tanka’s third buzzword as well: Environments.
-These days, the same piece of software is usually deployed many times inside a
-single organization. This could be dev
, testing
and prod
environments, but
-also regions (europe
, us
, asia
) or individual customers (foo-corp
,
-bar-gmbh
, baz-inc
).
Most of the application however is exactly the same across those environments …
-usually only configuration, scaling or small details are different after all.
-YAML (and thus kubectl
) provides us only one solution here: Duplicating the
-directory, changing the details, maintaining both. But what if you have 32
-environments? Correct! Then you have to maintain 32 directories of YAML. And we can all
-imagine the nightmare of these files drifting apart from each other.
But again, Jsonnet can be the solution: By extracting the actual objects -into a library, you can import them in as many environments as you need!
-A library is nothing special, just a folder of .libsonnet
files somewhere in the import paths:
Path | Description |
---|---|
/lib | Custom, user-created libraries only for this project. |
/vendor | External libraries installed using Jsonnet-bundler |
So for our purpose /lib
fits best, as we are only creating it for our current
-project. Let’s set one up:
For documentation purposes it is handy to have a separate file for parameters and used images:
- - -So far we have only used the environments/default
environment. Let’s create some real ones:
All that’s left now is importing the library and configuring it. For dev
, the defaults defined in /lib/prom-grafana/config.libsonnet
should be sufficient, so we do not override anything:
For prod
however, it is a bad idea to rely on latest
for the images .. let’s
-add some proper tags:
The above works well for libraries we control ourselves, but what when another
-team wrote the library, it was installed using jb
from GitHub or you can’t
-change it easily?
Here comes the already familiar +:
(or +::
) syntax into play. It allows to
-partially override values of an object. Let’s say we wanted to add some labels to the Prometheus Deployment
, but our _config
params don’t allow us to. We can still do this in our main.jsonnet
:
By using the +:
operator all the time and only foo: "bar"
uses “:
”, we only
-override the value of "foo"
, while leaving the rest of the object like it was.
Let’s check it worked:
- -The most powerful piece of Tanka is the Jsonnet data templating -language. Jsonnet is a superset of JSON, adding variables, -functions, patching (deep merging), arithmetic, conditionals and many more to -it.
-It has a lot in common with more real programming languages such as JavaScript -than with markup languages, still it is tailored specifically to representing -data and configuration. As opposed to JSON (and YAML) it is a language meant for -humans, not for computers.
-To get started with Tanka and Jsonnet, let’s initiate a new project, in which we will install both Prometheus and Grafana into our Kubernetes cluster:
- -This gives us the following directory structure:
-For the moment, we only really care about the environments/default
folder. The
-purpose of the other directories will be explained later in this guide (mostly
-related to libraries).
When using Tanka, you apply configuration for an Environment to a -Kubernetes cluster. An Environment is some logical group of pieces that form -an application stack.
-Grafana Labs for example runs Loki,
-Cortex and of course
-Grafana for our Grafana
-Cloud hosted offering. For each of these, we have a
-separate environment. Furthermore, we like to see changes to our code in
-separate dev
setups to make sure they are all good for production usage – so
-we have dev
and prod
environments for each app as well, as prod
-environments usually require other configuration (secrets, scale, etc) than
-dev
. This roughly leaves us with the following:
Environment | Loki | Cortex | Grafana |
---|---|---|---|
prod | Name: /environments/loki/prod Namespace: loki-prod | Name: /environments/cortex/prod Namespace: cortex-prod | Name: /environments/grafana/prod Namespace: grafana-prod |
dev | Name: /environments/loki/dev Namespace: loki-dev | Name: /environments/cortex/dev Namespace: cortex-dev | Name: /environments/grafana/dev Namespace: grafana-dev |
There is no limit in Environment complexity, create as many as you need to model -your own requirements. Grafana Labs for example also has all of these multiplied per -high-availability region.
-To get started, a single environment is enough. Lets use the automatically
-created environnments/default
for that.
While kubectl
loads all .yaml
files in a certain folder, Tanka has a single
-file that serves as the canonical source for all contents of an environment,
-called main.jsonnet
. This is just like Go has the main.go
or C++ the
-main.cpp
.
Similar to JSON, each .jsonnet
file holds a single object. The one returned by
-main.jsonnet
will hold all of your Kubernetes resources:
They may be deeply nested, Tanka extracts everything that looks like a -Kubernetes resource automatically.
-So let’s rewrite the previous .yaml
to
-very basic .jsonnet
:
At the moment, this is even more verbose because we have effectively converted -YAML to JSON, which requires more characters by design.
-But Jsonnet opens up enough possibilities to improve this a lot, which will be -covered in the next sections.
-So far so good, but can we make sure Tanka correctly finds our resources? We
-can! By running tk show
you can see the good old yaml, just as kubectl
-receives it:
Spend some time here and try to identify resources from the output in the
-.jsonnet
source.
The YAML looks as expected? Let’s apply it to the cluster. To do so, Tanka needs -some additional configuration.
-While kubectl
uses a $KUBECONFIG
environment variable and a file in the home
-directory to store the currently selected cluster, Tanka takes a more explicit
-approach:
Each environment has a file called spec.json
, which includes the information
-to select a cluster:
You still have to setup a cluster in $KUBECONFIG
that matches this IP – Tanka
-will automatically find and use it. This also means that all of your kubectl
-clusters just work.
This allows us to make sure that you will never accidentally apply to the wrong -cluster.
- -Before applying to the cluster, Tanka gives you a chance to check that your
-changes actually behave as expected: tk diff
works just like git diff
– you
-see what will be changed.
As you can see, it shows everything as to-be created .. just as we’d expect, -since we are using a blank namespace.
- -Once it’s all looking good, tk apply
serves the exact same purpose as kubectl apply
:
It shows you the diff first and the chosen cluster once more and requires
-interactive approval (type yes
).
After that, kubectl
is used to apply to the cluster. By piping to
-kubectl
Tanka makes sure it behaves exactly as you would expect it. No
-edge-cases of differing Kubernetes client implementations should ever occur.
Again, let’s connect to Grafana:
- -And go to http://localhost:8080 for Grafana’s UI.
The last section has shown that using a library for creating Kubernetes objects -can drastically simplify the code you need to write. However, there is a huge -amount of different kinds of objects and the Kubernetes API is evolving (and -thus changing) quite rapidly.
Writing and maintaining such a library could be a full-time job on its own. Luckily, it is possible to generate such a library from the Kubernetes OpenAPI specification! Even better, it has already been done for you.
-The library is called k8s-libsonnet
(replacing the discontinued ksonnet-lib
),
-currently available at https://github.com/jsonnet-libs/k8s-libsonnet.
As k8s-libsonnet
has broken compatibility in a few places with ksonnet-lib
(for good
-reason), we have instrumented the widely used ksonnet-util
library with a
-compatibility layer to improve the developer and user experience:
-https://github.com/grafana/jsonnet-libs/tree/master/ksonnet-util
If you do not have any strong reasons against it, just adopt the wrapper as
-well, it will ease your work. Many of the original ksonnet-util
enhancements
-have already made their way into k8s-libsonnet
.
The docs for k8s-libsonnet
library can be found here:
-https://jsonnet-libs.github.io/k8s-libsonnet/
Like every other external library, k8s-libsonnet
can be installed using
-jsonnet-bundler
.
-However, Tanka already did this for you during project
-creation (tk init
):
This created the following structure in /vendor
:
Because of how jb
works, the library can be imported as
-github.com/jsonnet-libs/k8s-libsonnet/1.21/main.libsonnet
. Most external
-libraries (including our wrapper) expect it as a simple k.libsonnet
(without
-the package prefix).
To support both, Tanka automatically created an alias file for you:
-/lib/k.libsonnet
that just imports the actual library, exposing it under this
-alternative name as well.
First we need to import it in main.jsonnet
:
Now that we have installed the correct version, let’s use it in
-/environments/default/grafana.libsonnet
instead of our own helper:
Now that creating the individual objects does not take more than 5 lines, we can
-merge it all back into a single file (main.jsonnet
) and take a look at the
-whole picture:
That’s a pretty big improvement, considering how verbose and error-prone it was -before!
-While this is already a huge improvement, we can do a bit more. There is still some repetition in the main.jsonnet
file.
-The most straightforward way to address this is by creating a hidden object that holds all actual values in a single place to be consumed by the actual resources.
Luckily, Jsonnet has the key:: "value"
stanza for private fields. Such are only available during compiling and will be removed from the actual output.
Such an object could look like this:
- -We can then replace hardcoded values with a reference to this object:
-Welcome to the Tanka tutorial!
-The following sections will explain how to deploy an example stack,
-(Grafana and
-Prometheus), to Kubernetes. We will also deal with parameters, differences between dev
and prod
and how to stop worrying and love libraries.
To do so, we have the following steps:
-kubectl
to understand what Tanka will do for us.dev
and prod
.k.libsonnet
: Avoid having to remember API resources.Completing this gives a solid knowledge of Tanka’s fundamentals. Let’s get started!
-Deploying using Tanka worked well, but it did not really improve the situation -in terms of maintainability and readability.
-To do so, the following sections will explore some ways Jsonnet provides us with.
-Defining our deployment in a single block is not the best solution. -Luckily with Jsonnet we can split our configuration into smaller, self-contained chunks.
-Let’s start by creating a new function in main.jsonnet
responsible for creating a Grafana deployment:
and let’s use it in our main configuration:
- -We can then replace hardcoded values by adding parameters to our function:
- -and update the usage accordingly:
- - -Now we do not only have a single place to change tunables, but also won’t suffer -from mismatching labels and selectors anymore, as they are defined in a single -place and all changed at once.
-To understand how Tanka works, it is important to know what steps are required -for the task of deploying Grafana and Prometheus to Kubernetes:
-Deployment
must be created, to run the prom/prometheus
imageService
is needed for Grafana to be able to connect port 9090
of
-Prometheus.Deployment
is required for the Grafana server.Service
of type
-NodePort
.Before taking a look how Tanka can help doing so, let’s recall how to do it with
-plain kubectl
.
kubectl
expects the resources it should create in .yaml
format.
-For Grafana:
and for Prometheus:
- -That’s pretty verbose, right?
-Even worse, there are labels and matchers (e.g. prometheus
) that need to be
-exactly the same scattered across the file. It’s a nightmare to debug and
-furthermore harms readability a lot.
To actually apply those resources, copy them into .yaml
files and use:
So far so good, but can we tell it actually did what we wanted? Let’s test that -Grafana can connect to Prometheus!
- -Now go to http://localhost:8080 in your browser and login using admin:admin
.
-Then navigate to Connections > Data sources > Add new data source
, choose
-Prometheus
as type and enter http://prometheus:9090
as URL. Hit
-Save & Test
which should yield a big green bar telling you everything is good.
Cool! This worked out well for this small example, but the .yaml
files are
-hard to read and maintain. Especially when you need to deploy this exact same
-thing in dev
and prod
your choices are very limited.
Let’s explore how Tanka can help us here in the next section!
-Let’s remove everything we created to start fresh with Jsonnet in the next section:
-