At the core of Kapitan lies the Inventory, a structured database of variables meticulously organized in YAML files.

This hierarchical setup serves as the single source of truth (SSOT) for your system's configurations, making it easier to manage and reference the essential components of your infrastructure.

Whether you're dealing with Kubernetes configurations, Terraform resources, or even business logic, the Inventory allows you to define and store these elements efficiently. This central repository then feeds into Kapitan's templating engines, enabling seamless reuse across various applications and services.

Kapitan takes the information stored in the Inventory and brings it to life through its templating engines upon compilation. Some of the supported input types are: Python, Jinja2, Jsonnet, Helm, and we're adding more soon.

This process transforms static data into dynamic configurations, capable of generating a wide array of outputs like Kubernetes manifests, Terraform plans, documentation, and scripts.

It's about making your configurations work for you, tailored to the specific needs of your projects.
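As a sketch of how the Inventory feeds these templating engines, a target can declare compile items under the reserved parameters.kapitan.compile key. The paths and names below are illustrative, not taken from a real repository:

```yaml
parameters:
  kapitan:
    compile:
      # Render the Jinja2 templates found under templates/
      # into compiled/<target>/manifests
      - output_path: manifests
        input_type: jinja2
        input_paths:
          - templates/
```

Each entry pairs an input type (jinja2, jsonnet, kadet, helm, ...) with the paths it should read and where its output should land under the target's compiled directory.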
Generators offer a straightforward entry point into using Kapitan, requiring minimal to no coding experience.

These are essentially pre-made templates that allow you to generate common configuration files, such as Kubernetes manifests, directly from your Inventory data.

Kapitan provides a wealth of resources, including the Kapitan Reference GitHub repository and various blog posts, to help users get up and running with generators.

For those looking to leverage the full power of Kapitan, Kadet introduces a method to define and reuse complex configurations through Python.

This internal library facilitates the creation of JSON and YAML manifests programmatically, offering a higher degree of customization and reuse. Kadet empowers users to craft intricate configurations with the simplicity and flexibility of Python.
Kapitan References provide a secure way to store passwords, settings, and other essential data within your project. Think of them as special code placeholders.
The Inventory is a core component of Kapitan: this section aims to explain how it works and how to best take advantage of it.
The Inventory is a hierarchical, YAML-based structure that you use to capture anything you want to make available to Kapitan, so that it can be passed on to its templating engines.
The first concept to learn about the Inventory is the target. A target is a file, found under the inventory/targets substructure, that tells Kapitan what you want to compile. It will usually map to something you want to do with Kapitan.

For instance, you might want to define a target for each environment that you want to deploy using Kapitan.

The Inventory also lets you define and reuse common configurations through YAML files that are referred to as classes: by listing classes in a target, their content gets merged together, allowing you to compose complex configurations without repetition.

By combining targets and classes, the Inventory becomes the SSOT for your whole configuration, and learning how to use it will unleash the real power of Kapitan.
Info

The Kapitan Inventory is based on an open source project called reclass, and you can find the full documentation on our GitHub clone. However, we discourage you from reading the reclass documentation before you learn more about Kapitan, because Kapitan uses a fork of reclass and greatly simplifies the reclass experience.

Info

Kapitan allows users to switch the inventory backend to reclass-rs by passing --inventory-backend=reclass-rs on the command line. Alternatively, you can define the backend in the .kapitan config file.
Kapitan enforces very little structure for the Inventory, so that you can adapt it to your specific needs. This might be overwhelming at the beginning: don't worry, we will explain best practices and give guidelines soon.
Classes are found by default under the inventory/classes directory and hold common settings and data that you define once and can include in other files. This promotes consistency and reduces duplication.

Classes are identified with a name that maps to the directory structure they are nested under. In this example, the kapicorp.common class is represented by the file inventory/classes/kapicorp/common.yml.
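A minimal sketch of what such a class file might contain — the parameter names and values below are illustrative, not taken from the actual kapicorp repository:

```yaml
# inventory/classes/kapicorp/common.yml (hypothetical contents)
# Shared settings that any target can inherit by listing `kapicorp.common`
# in its classes section.
parameters:
  organization: kapicorp          # illustrative value
  docs_url: https://kapitan.dev   # illustrative value
```

Any target that lists kapicorp.common under classes gets these parameters merged into its own.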
Targets are found by default under the inventory/targets directory and represent the different environments or components you want to manage. Each target is a YAML file that defines a set of configurations.

For example, you might have targets for production, staging, and development environments.
A target is a file that lives under the inventory/targets subdirectory, and that tells Kapitan what you want it to do for you.
Kapitan will recognise all YAML files in the inventory/targets subtree as targets.

Note

Only use .yml as the extension for Inventory files. .yaml will not be recognised as a valid Inventory file.

What you do with a target is largely up to you and your setup. Common examples:

clusters: Map each target to a cluster, capturing all configurations needed for a given cluster. For instance: targets/clusters/production-cluster1.yml

applications: When using Kapitan to manage Kubernetes applications, you might define a target for everything that you would normally deploy in a single namespace, including all its resources, scripts, secrets and documentation. For instance: targets/mysql.yml

environments: You might want to define a different target for each environment you have, like dev.yml, test.yml and prod.yml

cloud projects: When working with Terraform, it may be convenient to group targets by cloud project. For instance: targets/gcp/projects/engineering-prod.yml

single tenancy: When deploying a single-tenancy application, you might combine the approaches above, and have a target acme.yml that is used to define both Terraform and Kubernetes resources for a given tenant, perhaps also with some ArgoCD or Spinnaker pipelines to go with it.

Example

If you have configured your Kapitan repository as in the Quick Start instructions, you can run the commands we give during the course of this documentation.
When you run kapitan compile, you instruct Kapitan to generate, for each given target, a directory under compiled with the same name. Under this directory you will find all the files that Kapitan generated for that target.
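For example, assuming two hypothetical targets dev and prod, the output layout would look roughly like this:

```text
compiled/
├── dev/     # everything generated for inventory/targets/dev.yml
│   └── ...
└── prod/    # everything generated for inventory/targets/prod.yml
    └── ...
```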
In the world of Kapitan, a target represents a specific environment or deployment scenario where you want to apply your configurations.

Think of it as a blueprint that defines all the necessary settings and parameters for a particular deployment.

For instance, you might have separate targets for production, staging, and development environments, each with its own unique configurations.

Defining targets

Targets are defined as YAML files within the inventory/targets/ directory of your Kapitan project. Each target file typically includes:

classes:
  # A list of classes to inherit configurations from.
  # This allows you to reuse common settings and avoid repetition.
  -

parameters:
  # Target-specific parameters that override or extend the parameters
  # inherited from previously loaded classes.
classes is a list of class files you will want to import.

parameters allows for local overrides of what is unique to this target.

For example, consider a target that:

Inherits configurations from the common and components.nginx classes.

Sets the environment parameter to production.

Overrides (if defined) the replicas for the nginx component to 3.

Defines the namespace as production using variable interpolation.

Creates a dynamic description based on the content of the environment variable.
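A target file matching the description above might look like the following. This is a sketch: the exact class names (common, components.nginx) and parameter layout are assumptions that depend on your repository:

```yaml
# inventory/targets/production.yml (hypothetical)
classes:
  - common
  - components.nginx

parameters:
  environment: production
  components:
    nginx:
      replicas: 3                 # overrides the class default, if one is defined
  namespace: ${environment}       # variable interpolation
  description: "nginx deployment for the ${environment} environment"
```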
Info

The kapitan key under the root parameters is reserved for Kapitan usage. Some examples:

parameters:
  kapitan:
    compile:       # input types configuration section
    dependencies:  # dependencies configuration section to download resources
    secrets:       # secret encryption/decryption configuration section
    validate:      # items which indicate which compiled output to validate
    vars:          # which are also passed down to input types as context
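As a sketch of the vars section in use — the parameter names below are illustrative, and assume your target already defines a namespace parameter:

```yaml
# Hypothetical use of the reserved vars section: values placed here are
# passed down to input types (e.g. Jinja2 or Kadet) as extra context.
parameters:
  kapitan:
    vars:
      target: ${target_name}
      namespace: ${namespace}
```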
Compiling targets

When you run kapitan compile -t <target_name>, Kapitan:

Reads the target file: Kapitan parses the YAML file for the specified target.

Merges configurations: It merges the parameters from the included classes with the target-specific parameters, giving priority to the target's values.

Generates output in compiled/target/path/targetname: It uses this merged configuration data, along with the input types and generators, to create the final configuration files for the target environment.

When you run kapitan compile without a target selector, it will compile all targets it discovers under the inventory/targets subdirectory.
Target directory structure

Targets are not limited to living directly within the inventory/targets directory.

They can be organized into subdirectories to create a more structured and hierarchical inventory. This is particularly useful for managing large and complex projects.

When targets are organized in subdirectories, Kapitan uses the full path from the targets/ directory to create the target name. This name is then used to identify the target during compilation and in the generated output.

In this example, the my-cluster.yml target file is located within the clusters/production/ subdirectory, and can be identified as clusters.production.my-cluster.
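Kapitan's actual implementation differs, but the naming convention described above can be sketched in a few lines of Python. The helper target_name below is hypothetical, for illustration only:

```python
from pathlib import Path

def target_name(path: str) -> str:
    """Derive a Kapitan-style target name from a file path under inventory/targets/."""
    rel = Path(path).relative_to("inventory/targets")
    # Drop the .yml extension and join the remaining path parts with dots.
    return ".".join(rel.with_suffix("").parts)

print(target_name("inventory/targets/clusters/production/my-cluster.yml"))
# → clusters.production.my-cluster
```

A target directly under inventory/targets/, such as prod.yml, simply keeps its bare name (prod).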
diff --git a/dev/search/search_index.json b/dev/search/search_index.json
index ad3035eb9..2ef20b841 100644
--- a/dev/search/search_index.json
+++ b/dev/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Kapitan: Keep your ship together","text":"
Kapitan aims to be your one-stop configuration management solution to help you manage the ever growing complexity of your configurations by enabling Platform Engineering and GitOps workflows.
It streamlines complex deployments across heterogeneous environments while providing a secure and adaptable framework for managing infrastructure configurations. Kapitan's inventory-driven model, powerful templating capabilities, and native secret management tools offer granular control, fostering consistency, reducing errors, and safeguarding sensitive data.
Empower your team to make changes to your infrastructure whilst maintaining full control, with a GitOps approach and full transparency.
Join the community #kapitan
Help us grow: give us a star or even better sponsor our project
"},{"location":"#why-do-i-need-kapitan","title":"Why do I need Kapitan?","text":""},{"location":"#video-tutorials-to-get-started","title":"Video Tutorials to get started","text":"
Kapitan Youtube Channel
InventoryReferencesHelm and Generators integrationRawkode: Introduction to Kapitan "},{"location":"ADOPTERS/","title":"Who uses Kapitan","text":"
If you're using Kapitan in your organization, please let us know by adding to this list on the docs/ADOPTERS.md file.
"},{"location":"FAQ/","title":"FAQ","text":""},{"location":"FAQ/#why-do-i-need-kapitan","title":"Why do I need Kapitan?","text":"
See Why do I need Kapitan?
"},{"location":"FAQ/#ask-your-question","title":"Ask your question","text":"
Please use the comments facility below to ask your question
"},{"location":"getting_started/","title":"Kapitan Overview","text":""},{"location":"getting_started/#setup-your-installation","title":"Setup your installation","text":"
Using our reference repositories you can easily get started with Kapitan
kapicorp/kapitan-reference repository is meant show you many working examples of things you can do with Kapitan. You can use this to get familiar with Kapitan
"},{"location":"getting_started/#install-kapitan-using-pip","title":"Install Kapitan using pip","text":""},{"location":"getting_started/#user","title":"User","text":"LinuxMac
kapitan will be installed in $HOME/.local/lib/python3.7/bin
pip3 install --user --upgrade kapitan\n
kapitan will be installed in $HOME/Library/Python/3.7/bin
Proposals can be submitted for review by performing a pull request against this repository. If approved the proposal will be published here for further review by the Kapitan community. Proposals tend to be improvements or design consideration for new features.
One of the motivations behing Kapitan's design is that we believe that everything about your setup should be tracked, and Kapitan takes this to the extreme. Sometimes, however, we have to manage values that we do not think they belong to the Inventory: perhaps they are either too variable (for instance, a Git commit sha that changes with every build) or too sensitive, like a password or a generic secret, and then they should always be encrypted.
Kapitan has a built in support for References, which you can use to manage both these use cases.
Kapitan References supports the following backends:
Backend Description Encrypted plain Plain text, (e.g. commit sha) base64 Base64, non confidential but with base64 encoding gpg Support for https://gnupg.org/ gkms GCP KMS awskms AWS KMS azkms Azure Key Vault env Environment vaultkv Hashicorp Vault (RO) vaulttransit Hashicorp Vault (encrypt, decrypt, update_key, rotate_key)"},{"location":"references/#setup","title":"Setup","text":"
Some reference backends require configuration, both in the Inventory and to configure the actual backend.
Get started
If you want to get started with references but don't want to deal with the initial setup, you can use the plain and base64 reference types. These are great for demos, but we will see they are extremely helpful even in Production environments.
Danger
Both plain and base64 references do not support encryption: they are intended for development or demo purposes only. DO NOT use plain or base64 for storing sensitive information!
Backend configuration
Configuration for each backend varies, and it is perfomed by configuring the inventory under parameters.kapitan.secrets.
Now because they both set the parameters.backend variable, you can define a reference whose backend changes based on what class is assigned to the target
inventory/targets/cloud/gcp/acme.yml
classes:\n- cloud.aws\n\nparameters:\n ...\n mysql:\n # the secret backend will change based on the cloud assigned to this target\n root_password: ?{${backend}:targets/${target_name}/mysql/root_password}\n ...\n
The env backend works in a slightly different ways, as it allows you to reference environment variables at runtime.
For example, for a reference called {?env:targets/envs_defaults/mysql_port_${target_name}}, Kapitan would look for an environment variable called KAPITAN_ENV_mysql_port_${TARGET_NAME}.
If that variable cannot be found in the Kapitan environment, the default will be taken from the refs/targets/envs_defaults/mysql_port_${TARGET_NAME} file instead.
Kapitan has built in capabilities to initialise its references on creation, using an elegant combination of primary and secondary functions. This is extremely powerful because it allows for you to make sure they are always initialised with sensible values.
To automate the creation of the reference, you can add one of the following primary functions to the reference tag by using the syntax ||primary_function:param1:param2
For instance, to automatically initialise a reference with a random string with a lenght of 32 characters, you can use the random primary function
The first operator here || is more similar to a logical OR.
If the reference file does not exist, Kapitan will use the function to initialise it
If the reference file exists, no functions will run.
Automate secret rotation with ease
You can take advantage of it to implement easy rotation of secrets. Simply delete the reference files, and run kapitan compile: let Kapitan do the rest.
If you use reveal to initialise a reference, like my_reference||reveal:source_reference the my_reference will not be automatically updated if source_reference changes. Please make sure you also re-initialise my_reference correctly
"},{"location":"references/#using-subvars-to-ingest-yaml-from-command-line-tools","title":"Using subvars to ingest yaml from command line tools","text":"
Subvars can have a very practical use for storing YAML outputs coming straight from other tools. For instance, I could use the GCP gcloud command to get all the information about a cluster, and write it into a reference
Combined with a Jinja template, I could write automatically documentation containing the details of the clusters I use.
```text\n{% set p = inventory.parameters %}\n# Documentation for {{p.target_name}}\n\nCluster [{{p.cluster.name}}]({{p.cluster.link}}) has release channel {{p.cluster.release_channel}}\n```\n
Considering a key-value pair like my_key:my_secret in the path secret/foo/bar in a kv-v2(KV version 2) secret engine on the vault server, to use this as a secret use:
Leave mount empty to use the specified mount from vault params from the inventory (see below). Same applies to the path/in/vault where the ref path in kapitan gets taken as default value.
Parameters in the secret file are collected from the inventory of the target we gave from CLI -t <target_name>. If target isn't provided then kapitan will identify the variables from the environment when revealing secret.
The environment variables which can also be defined in kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH & VAULT_CACERT. Note that providing these variables through the inventory in envvar style is deprecated. Users should update their inventory to set these values in keys without the VAULT_ prefix and in all lowercase. For example VAULT_ADDR: https://127.0.0.1:8200 should be given as addr: https://127.0.0.1:8200 in the inventory. Please note that configuring one of these values in both kapitan.secrets.vaultkv in the inventory and in the environment will cause a validation error.
Extra parameters that can be defined in inventory are:
auth: specify which authentication method to use like token,userpass,ldap,github & approle
mount: specify the mount point of key's path. e.g if path=alpha-secret/foo/bar then mount: alpha-secret (default secret)
engine: secret engine used, either kv-v2 or kv (default kv-v2)
The environment variables which cannot be defined in inventory are VAULT_TOKEN,VAULT_USERNAME,VAULT_PASSWORD,VAULT_ROLE_ID,VAULT_SECRET_ID.
Considering a key-value pair like my_key:my_secret in the path secret/foo/bar in a transit secret engine on the vault server, to use this as a secret use:
Parameters in the secret file are collected from the inventory of the target we gave from CLI -t <target_name>. If target isn't provided then kapitan will identify the variables from the environment when revealing secret.
Environment variables that can be defined in kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH & VAULT_CACERT. Extra parameters that can be defined in inventory are:
auth: specify which authentication method to use like token,userpass,ldap,github & approle
mount: specify the mount point of key's path. e.g if path=my_mount (default transit)
crypto_key: Name of the encryption key defined in vault
always_latest: Always rewrap ciphertext to latest rotated crypto_key version Environment variables cannot be defined in inventory are VAULT_TOKEN,VAULT_USERNAME,VAULT_PASSWORD,VAULT_ROLE_ID,VAULT_SECRET_ID.
To encrypt secrets using keys stored in Azure's Key Vault, a key_id is required to identify an Azure key object uniquely. It should be of the form https://{keyvault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}.
"},{"location":"references/#defining-the-kms-key","title":"Defining the KMS key","text":"
This is done in the inventory under parameters.kapitan.secrets.
This introduces a new experimental input type called Kadet.
Kadet is essentially a Python module offering a set of classes and functions to define objects which will compile to JSON or YAML. A complete example is available in examples/kubernetes/components/nginx.
BaseObj implements the basic object implementation that compiles into JSON or YAML. Setting keys in self.root means they will be in the compiled output. Keys can be set as an hierarchy of attributes (courtesy of addict) The self.body() method is reserved for setting self.root on instantiation:
The self.new() method can be used to define a basic constructor. self.need() checks if a key is set and errors if it isn't (with an optional custom error message). kwargs that are passed onto a new instance of BaseObj are always accessible via self.kwargs In this example, MyApp needs name and foo to be passed as kwargs.
class MyApp(BaseObj):\n def new(self):\n self.need(\"name\")\n self.need(\"foo\", msg=\"please provide a value for foo\")\n\n def body(self):\n self.root.name = self.kwargs.name\n self.root.inner.foo = self.kwargs.foo\n self.root.list = [1, 2, 3]\n\nobj = MyApp(name=\"myapp\", foo=\"bar\")\n
","tags":["kubernetes","kadet"]},{"location":"kap_proposals/kap_0_kadet/#setting-a-skeleton","title":"Setting a skeleton","text":"
Defining a large body with Python can be quite hard and repetitive to read and write. The self.update_root() method allows importing a YAML/JSON file to set the skeleton of self.root.
class MyApp(BaseObj):\n def new(self):\n self.need(\"name\")\n self.need(\"foo\", msg=\"please provide a value for foo\")\n self.update_root(\"path/to/skel.yml\")\n
Extending a skeleton'd MyApp is possible just by implementing self.body():
class MyApp(BaseObj):\n def new(self):\n self.need(\"name\")\n self.need(\"foo\", msg=\"please provide a value for foo\")\n self.update_root(\"path/to/skel.yml\")\n\n def body(self):\n self.set_replicas()\n self.root.metadata.labels = {\"app\": \"mylabel\"}\n\ndef set_replicas(self):\n self.root.spec.replicas = 5\n
A component in Kadet is a python module that must implement a main() function returning an instance ofBaseObj. The inventory is also available via the inventory() function.
For example, a tinyapp component:
# components/tinyapp/__init__.py\nfrom kapitan.inputs.kadet import BaseOBj, inventory\ninv = inventory() # returns inventory for target being compiled\n\nclass TinyApp(BaseObj):\n def body(self):\n self.root.foo = \"bar\"\n self.root.replicas = inv.parameters.tinyapp.replicas\n\ndef main():\n obj = BaseOb()\n obj.root.deployment = TinyApp() # will compile into deployment.yml\n return obj\n
A library in --search-paths (which now defaults to . and lib/) can also be a module that kadet components import. It is loaded using the load_from_search_paths():
","tags":["kubernetes","kadet"]},{"location":"kap_proposals/kap_10_azure_key_vault/","title":"Support for Azure Key Management","text":"
This feature will enable users to encrypt secrets using keys stored in Azure's Key Vault. The azkms keyword will be used to access the azure key management backend.
key_id uniquely identifies an Azure key object and it's version stored in Key Vault. It is of the form https://{keyvault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}. It needs to be made accessible to kapitan in one of the following ways:
note Cryptographic algorithm used for encryption would be rsa-oaep-256. Optimal Asymmetric Encryption Padding (OAEP) is a padding scheme often used together with RSA encryption.
"},{"location":"kap_proposals/kap_10_azure_key_vault/#referencing-a-secret","title":"referencing a secret","text":"
Secrets can be refered using ?{azkms:path/to/secret_id} e.g.
The following variables need to be exported to the environment(depending on authentication used) where you will run kapitan refs --reveal in order to authenticate to your HashiCorp Vault instance:
VAULT_ADDR: URL for vault
VAULT_SKIP_VERIFY=true: if set, do not verify presented TLS certificate before communicating with Vault server. Setting this variable is not recommended except during testing
VAULT_TOKEN: token for vault or file (~/.vault-tokens)
VAULT_ROLE_ID: required by approle
VAULT_SECRET_ID: required by approle
VAULT_USERNAME: username to login to vault
VAULT_PASSWORD: password to login to vault
VAULT_CLIENT_KEY: the path to an unencrypted PEM-encoded private key matching the client certificate
VAULT_CLIENT_CERT: the path to a PEM-encoded client certificate for TLS authentication to the Vault server
VAULT_CACERT: the path to a PEM-encoded CA cert file to use to verify the Vault server TLS certificate
VAULT_CAPATH: the path to a directory of PEM-encoded CA cert files to verify the Vault server TLS certificate
VAULT_NAMESPACE: specify the Vault Namespace, if you have one
Considering any stringdata like any.value:whatever-you_may*like ( in our case let\u2019s encrypt any.value:whatever-you_may*like with vault transit ) using the key 2022-02-13-test in a transit secret engine with mount mytransit on the vault server, to use this as a secret either follow:
The entire string \"any.value:whatever-you_may*like\" will be encrypted by vault and looks like this in return: vault:v2:Jhn3UzthKcJ2s+sEiO60EUiDmuzqUC4mMBWp2Vjg/DGl+GDFEDIPmAQpc5BdIefkplb6yrJZq63xQ9s=. This then gets base64 encoded and stored in the secret_inside_kapitan. Now secret_inside_kapitan contains the following
Encoding tells the type of data given to kapitan, if it is original then after decoding base64 we'll get the original secret and if it is base64 then after decoding once we still have a base64 encoded secret and have to decode again. Parameters in the secret file are collected from the inventory of the target we gave from CLI --target my_target. If target isn't provided then kapitan will identify the variables from the environment, but providing auth is necessary as a key inside target parameters like the one shown:
Environment variables that can be defined in kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH & VAULT_CACERT. Extra parameters that can be defined in inventory are:
auth: specify which authentication method to use like token,userpass,ldap,github & approle
mount: specify the mount point of key's path. e.g if path=alpha-secret/foo/bar then mount: alpha-secret (default secret)
crypto_key: Name of the encryption key defined in vault
always_latest: Always rewrap ciphertext to latest rotated crypto_key version Environment variables should NOT be defined in inventory are VAULT_TOKEN,VAULT_USERNAME,VAULT_PASSWORD,VAULT_ROLE_ID,VAULT_SECRET_ID. This makes the secret_inside_kapitan file accessible throughout the inventory, where we can use the secret whenever necessary like ?{vaulttransit:${target_name}/secret_inside_kapitan}
Following is the example file having a secret and pointing to the vault ?{vaulttransit:${target_name}/secret_inside_kapitan}
when ?{vaulttransit:${target_name}/secret_inside_kapitan} is compiled, it will look same with an 8 character prefix of sha256 hash added at the end like:
Only the user with the required tokens/permissions can reveal the secrets. Please note that the roles and permissions will be handled at the Vault level. We need not worry about it within Kapitan. Use the following command to reveal the secrets:
The output path is the path to save the dependency into. For example, it could be /components/external/manifest.jsonnet. Then, the user can specify the fetched file as a kapitan.compile item along with the locally-created files.
Git type may also include ref and subdir parameters as illustrated below:
If the file already exists at output_path, the fetch will be skipped. For fresh fetch of the dependencies, users may add --fetch option as follows:
kapitan compile --fetch\n
Users can also add the force_fetch: true option to the kapitan.dependencies in the inventory in order to force fetch of the dependencies of the target every time.
This will allow kapitan, during compilation, to overwrite the values in user-specified helm charts using its inventory by calling the Go & Sprig template libraries. The helm charts can be specified via local path, and users may download the helm chart via external-dependency feature (of http[s] type).
This feature basically follows the helm template command available. This will run after the fetching of the external dependencies takes place, such that users can simultaneously specify the fetch as well as the import of a helm chart dependency.
C-binding between Helm (Go) and Kapitan (Python) will be created. Helm makes use of two template libraries, namely, text/template and Sprig. The code for helm template command will be converted into shared object (.so) using CGo, which exposes C interface that kapitan (i.e. CPython) could use. The source code for helm template command is found here. This file will be modified to
If a yaml/json output is to be used as k8s manifest, users may specify its kind and have kapitan validate its structure during kapitan compile. The plan is to have this validation feature extendable to other outputs as well, such as terraform.
Create a portable (i.e. static) kapitan binary for users. This executable will be made available for each release on Github. The target/tested platform is Debian 9 (possibly Windows to be supported in the future).
Criteria:
speed of the resulting binary
size of the resulting binary
portability of the binary (single-file executable or has an accompanying folder)
cross-platform
actively maintained
supports Python 3.6, 3.7
Author: @yoshi-1224
"},{"location":"kap_proposals/kap_4_standalone_executable/#tools-to-be-explored","title":"Tools to be explored","text":"
(tentative first-choice) Pyinstaller
(Alternative) nuitka (also part of GSoC 2019. It might soon support single-file executable output).
Rename Secrets into Ref (or References) to improve consistency and meaning of the backend types by removing the ref backend and introducting new backends:
| Type   | Description | Encrypted? | Compiles To |
|--------|-------------|------------|-------------|
| gpg    | GnuPG       | Yes        | hashed tag  |
| gkms   | Google KMS  | Yes        | hashed tag  |
| awskms | Amazon KMS  | Yes        | hashed tag  |
| base64 | base64      | No         | hashed tag  |
| plain  | plain text  | No         | plain text  |
The type value will now need to be representative of the way a reference is stored via its backend.
A new plain backend type is introduced and will compile into revealed state instead of a hashed tag.
A new base64 backend type will store a base64 encoded value as the backend suggests (replacing the old badly named ref backend).
| Type   | Description | Encrypted? | Compiles To |
|--------|-------------|------------|-------------|
| gpg    | GnuPG       | Yes        | hashed tag  |
| gkms   | Google KMS  | Yes        | hashed tag  |
| awskms | Amazon KMS  | Yes        | hashed tag  |
| ref    | base64      | No         | hashed tag  |
However, not all backends are encrypted - this is not consistent!
The ref type is not encrypted as its purpose is to allow getting started with the Kapitan Secrets workflow without the need of setting up the encryption backends tooling (gpg, gcloud, boto, etc...)
The following variables need to be exported to the environment (depending on the authentication method used) where you will run kapitan refs --reveal, in order to authenticate to your HashiCorp Vault instance:
VAULT_ADDR: URL for vault
VAULT_SKIP_VERIFY=true: if set, do not verify presented TLS certificate before communicating with Vault server. Setting this variable is not recommended except during testing
VAULT_TOKEN: token for vault or file (~/.vault-tokens)
VAULT_ROLE_ID: required by approle
VAULT_SECRET_ID: required by approle
VAULT_USERNAME: username to login to vault
VAULT_PASSWORD: password to login to vault
VAULT_CLIENT_KEY: the path to an unencrypted PEM-encoded private key matching the client certificate
VAULT_CLIENT_CERT: the path to a PEM-encoded client certificate for TLS authentication to the Vault server
VAULT_CACERT: the path to a PEM-encoded CA cert file to use to verify the Vault server TLS certificate
VAULT_CAPATH: the path to a directory of PEM-encoded CA cert files to verify the Vault server TLS certificate
VAULT_NAMESPACE: specify the Vault Namespace, if you have one
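For example, a token-based login might export something like the following (the address and token are placeholders, not real values):

```shell
# Placeholder values for illustration only — substitute your own Vault instance and token
export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="s.xxxxxxxxxxxxxxxx"
# then run: kapitan refs --reveal -f compiled/<your-target>
```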
Considering a key-value pair like my_key:my_secret (in our case let's store hello:batman inside the vault) in the path secret/foo in a kv-v2 (KV version 2) secret engine on the vault server, to use this as a secret follow either approach below:
The encoding tells Kapitan the type of data it is given: if it is original, decoding the base64 value once yields the original secret; if it is base64, one decode still leaves a base64-encoded secret, which has to be decoded again. Parameters in the secret file are collected from the inventory of the target given on the CLI, e.g. --target dev-sea. If no target is provided, Kapitan will identify the variables from the environment, but providing auth is still necessary as a key inside the target parameters, like the one shown:
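The difference between the two encodings can be illustrated with a short, self-contained Python sketch (the secret value is made up; this is not Kapitan's own code):

```python
import base64

secret = "batman"

# encoding: original — the stored value is base64(secret); one decode recovers it
stored_original = base64.b64encode(secret.encode()).decode()
assert base64.b64decode(stored_original).decode() == "batman"

# encoding: base64 — the secret was already base64 before being stored,
# so the stored value is base64(base64(secret)) and needs two decodes
already_b64 = base64.b64encode(secret.encode()).decode()
stored_base64 = base64.b64encode(already_b64.encode()).decode()
once = base64.b64decode(stored_base64).decode()  # still base64-encoded
assert base64.b64decode(once).decode() == "batman"
```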
Environment variables that can be defined in kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH & VAULT_CACERT. Extra parameters that can be defined in inventory are:
auth: specify which authentication method to use, like token, userpass, ldap, github & approle
mount: specify the mount point of the key's path, e.g. if path=alpha-secret/foo/bar then mount: alpha-secret (default secret)
engine: secret engine used, either kv-v2 or kv (default kv-v2). Environment variables that cannot be defined in the inventory are VAULT_TOKEN, VAULT_USERNAME, VAULT_PASSWORD, VAULT_ROLE_ID, VAULT_SECRET_ID. This makes the secret_inside_kapitan file accessible throughout the inventory, where we can use the secret whenever necessary, like ?{vaultkv:path/to/secret_inside_kapitan}
The following is an example file containing a secret and pointing to Vault: ?{vaultkv:path/to/secret_inside_kapitan}
Only the user with the required tokens/permissions can reveal the secrets. Please note that the roles and permissions will be handled at the Vault level. We need not worry about it within Kapitan. Use the following command to reveal the secrets:
This feature would add the ability for Kapitan to fetch parts of the inventory from remote locations (https/git). This would allow users to combine different inventories from different sources and build modular infrastructure reusable across various repos.
On executing the $ kapitan compile --fetch command, the remote inventories are fetched first, followed by the external dependencies, and finally the inventory is merged and compiled.
Copying inventory files to the output location
The output path is the path to save the inventory items into. The path is relative to the inventory/ directory. For example, it could be /classes/. The contents of the fetched inventory will be recursively copied.
The fetched inventory files will be cached in the .dependency_cache directory if --cache is set, e.g. $ kapitan compile --fetch --cache
Using a secret
In GCP, a secret contains one or more secret versions, along with its metadata. The actual contents of a secret are stored in a secret version. Each secret is identified by a name. We call that variable secret_id e.g. my_treasured_secret. The URI of the secret becomes projects/<Project_Id>/secrets/my_treasured_secret
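The URI construction described above is plain string composition; a hypothetical helper (not part of Kapitan's API) might look like:

```python
def secret_uri(project_id: str, secret_id: str) -> str:
    # A GCP secret is identified by its project and its secret_id
    return f"projects/{project_id}/secrets/{secret_id}"

print(secret_uri("my-project", "my_treasured_secret"))
# projects/my-project/secrets/my_treasured_secret
```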
The following command will be used to add a secret_id to kapitan.
Kapitan is packaged on PyPI and as a binary along with all its dependencies. Adding an extra key/security backend means that we need to ship another dependency with that PyPI package, making deploying changes more complicated. This project would modularize kapitan into core dependencies and extra modules.
The main module includes the essential kapitan dependencies and reclass dependencies, which will be included in the requirements.txt file.
The extra modules (PyPI extras) will be defined in the setup.py file.
The extra dependencies cover secret backends (AWS Key backend, Google KMS Key backend, Vault Key backend, etc.) and Helm support.
Bring Your Own Helm Proposal
The Problem
Currently the helm binding can't be run on Mac OSX. Attempts to fix this have been made on several occasions:
https://github.com/kapicorp/kapitan/pull/414
https://github.com/kapicorp/kapitan/pull/547
https://github.com/kapicorp/kapitan/pull/568
There are some issues with the current bindings besides the lack of Mac OSX support. The golang runtime (1.14) selected will affect older versions of helm templates: https://github.com/helm/helm/issues/7711. Also, users can't select the version of helm they'd like to use for templating.
Users supply their own helm binary. This allows them to control the version of golang runtime and version of helm they'd like to use.
In Kapitan we could rewrite the interface to use subprocess to perform the commands. The CLI of helm 2 vs helm 3 is slightly different, but that shouldn't be difficult to codify.
This would also let us get rid of cffi and golang, which would reduce the complexity and build time of the project.
Depending on how this goes, this could pave the way for a \"bring your own binary\" input type.
Say we want to fetch the source code from our kapitan repository, specifically, kapicorp/kapitan/kapitan/version.py. Let's create a very simple target file inventory/targets/kapitan-example.yml.
```yaml
parameters:
  kapitan:
    vars:
      target: kapitan-example
    dependencies:
      - type: git
        output_path: source/kapitan
        source: git@github.com:kapicorp/kapitan.git
        subdir: kapitan
        ref: master
        submodules: true
    compile:
      - input_paths:
          - source/kapitan/version.py
        input_type: jinja2 # just to copy the file over to target
        output_path: .
```
Say we want to download kapitan README.md file. Since it's on Github, we can access it as https://raw.githubusercontent.com/kapicorp/kapitan/master/README.md. Using the following inventory, we can copy this to our target folder:
Fetches helm charts and any specific subcharts in the requirements.yaml file.
helm_path can be used to specify the helm binary name or path. It defaults to the value of the KAPITAN_HELM_PATH environment variable, or simply to helm if neither is set. You should specify it only if you don't want the default behavior.
source can be either the URL to a chart repository, or the URL to a chart on an OCI registry (supported since Helm 3.8.0).
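The helm_path resolution order described above (explicit setting, then the KAPITAN_HELM_PATH environment variable, then plain helm) can be sketched as follows — this is an illustration, not Kapitan's actual implementation:

```python
import os

def resolve_helm_binary(helm_path=None):
    # Explicit setting wins; otherwise fall back to the env var, then to "helm"
    return helm_path or os.environ.get("KAPITAN_HELM_PATH") or "helm"

os.environ.pop("KAPITAN_HELM_PATH", None)
assert resolve_helm_binary() == "helm"

os.environ["KAPITAN_HELM_PATH"] = "/usr/local/bin/helm3"
assert resolve_helm_binary() == "/usr/local/bin/helm3"

# An explicit value overrides the environment variable
assert resolve_helm_binary("./bin/helm") == "./bin/helm"
```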
If we want to download the prometheus helm chart we simply add the dependency to the monitoring target. We want a specific version 11.3.0 so we put that in.
Kapitan Overview
Kapitan at a glance
Kapitan is a powerful configuration management tool designed to help engineers manage complex systems through code. It centralizes and simplifies the management of configurations with a structured approach that revolves around a few core concepts.
At the core of Kapitan lies the Inventory, a structured database of variables meticulously organized in YAML files. This hierarchical setup serves as the single source of truth (SSOT) for your system's configurations, making it easier to manage and reference the essential components of your infrastructure. Whether you're dealing with Kubernetes configurations, Terraform resources, or even business logic, the Inventory allows you to define and store these elements efficiently. This central repository then feeds into Kapitan's templating engines, enabling seamless reuse across various applications and services.
Kapitan takes the information stored in the Inventory and brings it to life through its templating engines upon compilation. This process transforms static data into dynamic configurations, capable of generating a wide array of outputs like Kubernetes manifests, Terraform plans, documentation, and scripts. It's about making your configurations work for you, tailored to the specific needs of your projects.
Generators offer a straightforward entry point into using Kapitan, requiring minimal to no coding experience. These are essentially pre-made templates that allow you to generate common configuration files, such as Kubernetes manifests, directly from your Inventory data. Kapitan provides a wealth of resources, including the Kapitan Reference GitHub repository and various blog posts, to help users get up and running with generators.
For those looking to leverage the full power of Kapitan, Kadet introduces a method to define and reuse complex configurations through Python. This internal library facilitates the creation of JSON and YAML manifests programmatically, offering a higher degree of customization and reuse. Kadet empowers users to craft intricate configurations with the simplicity and flexibility of Python.
Kapitan References provide a secure way to store passwords, settings, and other essential data within your project. Think of them as special code placeholders.
Flexibility: Update a password once, and Kapitan updates it everywhere automatically.
Organization: References tidy up your project, especially when you're juggling multiple settings or environments (dev, staging, production). Security: Protect sensitive information like passwords with encryption
Tip
Use Tesoro, our Kubernetes Admission Controller, to complete your integration with Kubernetes for secure secret decryption on-the-fly.
Kapitan is capable of recursively fetching inventory items stored in remote locations and copying them to the specified output path. This feature can be used by specifying those inventory items in classes or targets under parameters.kapitan.inventory. Supported types are:
git type
http type
Class items can be specified before they are locally available as long as they are fetched in the same run. Example of this is given below.
Git types can fetch external inventories available via HTTP/HTTPS or SSH URLs. This is useful for fetching repositories or their sub-directories, as well as accessing them in specific commits and branches (refs).
Note: git types require git binary on your system.
Let's say we want to fetch a class from our kapitan repository, specifically kapicorp/kapitan/tree/master/examples/docker/inventory/classes/dockerfiles.yml.
Let's create a simple target file docker.yml
Note
external dependencies are used to fetch dependency items in this example.
```text
[WARNING] Reclass class not found: 'dockerfiles'. Skipped!
[WARNING] Reclass class not found: 'dockerfiles'. Skipped!
Inventory https://github.com/kapicorp/kapitan: fetching now
Inventory https://github.com/kapicorp/kapitan: successfully fetched
Inventory https://github.com/kapicorp/kapitan: saved to inventory/classes
Dependency https://github.com/kapicorp/kapitan: saved to components
Dependency https://github.com/kapicorp/kapitan: saved to templates
Compiled docker (0.11s)
```
Blog
5 Years of Kapitan
Last October we quietly celebrated 5 years of Kapitan.
In 5 years, we've been able to witness a steady and relentless growth of Kapitan, which has however never caught the full attention of the majority of the community.
The main issue has always been around an embarrassing lack of documentation, and we've worked hard to improve on that, with more updates due soon.
Let this first blog post from a revamped website be a promise to our community of a better effort in explaining what sets Kapitan apart, and makes it the only tool of its kind.
And let's start with a simple question: Why do you even need Kapitan?
Credits
In reality Kapitan's heartbeat started about 9 months earlier at DeepMind Health, created by [**Ricardo Amaro**](https://github.com/ramaro) with the help of some of my amazing team: in no particular order [Adrian Chifor](https://github.com/adrianchifor), [Paul S](https://github.com/uberspot) and [Luis Buriola](https://github.com/gburiola). It was then kindly released to the community by Google/DeepMind and has since been improved thanks to more than [50 contributors](https://github.com/kapicorp/kapitan/graphs/contributors).
Why do I need Kapitan?
Kapitan is a hard sell, but a rewarding one. For these main reasons:
Kapitan solves problems that some don't know (or don't think) they have.
Some people by now have probably accepted the Status Quo and think that some suffering is part of their job descriptions.
Objectively, Kapitan requires an investment of effort to learn how to use a new tool, and this adds friction.
All I can say is that it is very rewarding once you get to use it, so stick with me while I try to explain the problems that Kapitan is solving.
It would be reductive to list the problems that Kapitan solves, because sometimes we ourselves are stunned by what Kapitan is being used for, so I will start with some common relatable ones, and perhaps that will give you the right framing to understand how to use it with your setup.
In its most basic explanation, Kapitan solves the problem of avoiding duplication of configuration data: by consolidating it in one place (the Inventory), and making it accessible by all the tools and languages it integrates with (see Input Types).
This configuration data is then used by Kapitan (templates) to configure and operate a number of completely distinct and unaware tools which would normally not be able to share their configurations.
Let's consider the case where you want to define a new bucket, with a given bucket_name. Without Kapitan you would probably need to:
Write a PR on your Terraform repository to create the new bucket.
Which name should I use? Make sure to write it down! CTRL-C
Write a PR for your values.yaml file to configure your Helm chart: <CTRL-V>
Write somewhere some documentation to write down the bucket name and why it exists. Another <CTRL-V>
Another PR to change some **kustomize** configuration for another service to tell it to use the new bucket <CTRL-V>
Days after, time to upload something to that bucket: gsutil cp my_file wait_what_was_the_bucket_name_again.. Better check the documentation: CTRL-C + <CTRL-V>
When using Kapitan, your changes are likely to be contained within one PR, from which you can have a full view of everything that is happening. What happens is explained in this flow
```mermaid
%%{ init: { securityLevel: 'loose'} }%%
graph LR
    classDef pink fill:#f9f,stroke:#333,stroke-width:4px,color:#000,font-weight: bold;
    classDef blue fill:#00FFFF,stroke:#333,stroke-width:4px,color:#000,font-weight: bold;
    classDef bold color:#000,font-weight: bold;

    DATA --> KAPITAN
    BUCKET --> DATA
    KAPITAN --> KUBERNETES
    KAPITAN --> TERRAFORM
    KAPITAN --> DOCUMENTATION
    KAPITAN --> SCRIPT
    KAPITAN --> HELM
    KUBERNETES --> BUCKET_K8S
    TERRAFORM --> BUCKET_TF
    DOCUMENTATION --> BUCKET_DOC
    SCRIPT --> BUCKET_SCRIPT
    HELM --> BUCKET_HELM

    DATA[("All your data")]
    BUCKET("bucket_name")
    KAPITAN(("<img src='/images/kapitan_logo.png'; width='150'/>")):::blue

    subgraph " "
        KUBERNETES(["Kubernetes"]):::pink
        BUCKET_K8S(".. a ConfigMap uses bucket_name"):::bold
    end
    subgraph " "
        TERRAFORM(["Terraform"]):::pink
        BUCKET_TF("..creates the bucket bucket_name"):::bold
    end
    subgraph " "
        DOCUMENTATION(["Documentation"]):::pink
        BUCKET_DOC("..references a link to bucket_name"):::bold
    end
    subgraph " "
        SCRIPT(["Canned Script"]):::pink
        BUCKET_SCRIPT("..knows how to upload files to bucket_name"):::bold
    end
    subgraph " "
        HELM(["Helm"]):::pink
        BUCKET_HELM("..configures a chart to use the bucket_name"):::bold
    end
```
Thanks to its flexibility, you can use Kapitan to generate all sorts of configurations: Kubernetes and Terraform resources, ArgoCD pipelines, Docker Compose files, random configs, scripts, documentation and anything else you find relevant. The trick is obviously in how to drive these changes, but it is not as complicated as it sounds. We'll get there soon enough!
Let's now look at another example of practices so established in the way we do things that they become elusively hard to see. To highlight the potential issues with this way of doing things, let's ask some questions about your current setup. We'll pick on Kubernetes this time.
I'll start with Kubernetes, such a popular and brilliant solution to problems most people should not be concerned with (jokes apart, I adore Kubernetes). To most, Kubernetes is the type of solution that quickly turns into a problem in its own right.
So.. how do you deploy to Kubernetes right now?
Helm comes to mind first, right?
Kapitan + Helm: BFF
In spite of Kapitan being initially considered (even by ourselves) as an alternative to Helm, we've actually enjoyed the benefits of integrating with this amazing tool and the ecosystem it gives us access to. So yes, good news: you can use Helm right from within Kapitan!
Well, let's put that to a test. How do you manage your Helm charts? I'll attempt to break these questions down into categories.
What about the official ones that you didn't create yourself?
How many values.yaml files do you have?
How much consistency is there between them? any snowflakes?
If you change something, like with the bucket_name example above:
how many places do you need to go and update?
And how many times do you get it wrong?
Don't you feel all your charts look the same?
Yet how many times do you need to deviate from the one you thought captured everything?
What if you need to make a change to all your charts at once: how do you deal with it?
What about configuration files, how do you deal with templating those?
How do you deal with "official" charts? Do they always cover what you want to do?
How do you deal with modifications that you need to apply to your own version of an official chart?
What if you need to make a change that affects ALL your charts?
Or if the change is for all the charts for a set of microservices?
How many times do you find yourself setting parameters on the command line of Helm and other tools?
How many times did you connect to the wrong context in Kubernetes?
How many of your colleagues have the same clean context setup as you have?
How many things are there that you wish you were tracking?
How do I connect to the production database? Which user is it again?
How easy is it for you to create a new environment from scratch?
Are you sure?
When was the last time you tried?
How easy is it to keep your configuration up to date?
Does your documentation need to be "understood", or can it just be executed on?
How many conditionals like this do you have in your documentation?
NOTE: Cluster X in project Y has an older version of Q and requires you to do Z instead N because of A, B and C!
Would you be able to follow those instructions at 3am on a Sunday morning?
How do you handle secrets in your repository?
Do you know how to create your secrets from scratch?
Do you remember that token you created 4 months ago? How did you do that?
How long would it take you?
Is the process of creating them "secure"?
Or does it leave you with random certificates and tokens unencrypted in your "Downloads" folder?
The above concerns: do they also apply to other things you manage?
Terraform?
Pipelines?
Random other systems you interact with?
I'll stop here because I do not want to lose you, and neither do I want to discourage you.
But if you look around, it's true: you do have a very complicated setup. And Kapitan can help you streamline it. In fact, Kapitan can leave you with a consistent and uniform way to manage all these concerns at once.
My job here is done: you have awakened and you won't look at your setup in the same way. Stay tuned and learn about how Kapitan can change the way you do things.
The Kapicorp team is happy to announce a new release of Kapitan.
This release is yet another great bundle of features and improvements over the past year, the majority of which have been contributions from our community!
Head over to our release page on GitHub for a full list of features and contributors.
If you missed it, have a look at our latest blog post here 5 years of Kapitan
Please help us by visiting our Sponsor Kapitan page.
The Kapicorp team is happy to announce a new release of Kapitan.
This release contains loads of improvements for the past 6 months, the majority of which have been contributions from our community!
Head over to our release page on GitHub for a full list of features and contributors.
Please help us by visiting our Sponsor Kapitan page.
Deploying Keda with Kapitan
We have worked hard to bring out a brand new way of experiencing Kapitan, through something that we call generators.
Although the concept is something we introduced in 2020 with our blog post Keep your ship together with Kapitan, the sheer amount of new capabilities (and, frankly, the embarrassing lack of documentation and examples) forces me to show you the new capabilities using a practical example: deploying Keda.
Objective of this tutorial
We are going to deploy Keda using the helm chart approach. While Kapitan supports a native way to deploy helm charts using the helm input type, we are instead going to use a generator-based approach using the "charts" generator.
This tutorial will show you how to configure kapitan to:
download a helm chart
compile a helm chart
modify a helm chart using mutations
The content of this tutorial is already available on the kapitan-reference repository.
```yaml
## inventory/classes/components/keda.yml
parameters:
  keda:
    params:
      # Variables to reference from other places
      application_version: 2.11.2
      service_account_name: keda-operator
      chart_name: keda
      chart_version: 2.11.2
      chart_dir: system/sources/charts/${keda:params:chart_name}/${keda:params:chart_name}/${keda:params:chart_version}/${keda:params:application_version}
      namespace: keda
      helm_values: {}
...
```
Override Helm Values
As an example we could be passing to helm an override to the default values parameters to make the operator deploy 2 replicas.
```yaml
helm_values:
  operator:
    replicaCount: 2
```
Download the chart
Kapitan supports downloading dependencies, including helm charts.
When Kapitan is run with --fetch, it will download the dependency if it is not already present. Use --force-fetch if you want to download it every time. Learn more about External dependencies.
```yaml
## inventory/classes/components/keda.yml
...
  kapitan:
    dependencies:
      # Tells kapitan to download the helm chart into the chart_dir directory
      - type: helm
        output_path: ${keda:params:chart_dir}
        source: https://kedacore.github.io/charts
        version: ${keda:params:chart_version}
        chart_name: ${keda:params:chart_name}
...
```
Parameter interpolation
Notice how we are using parameter interpolation from the previously defined keda.params section. This will make it easier in the future to override some aspects of the configuration on a per-target base.
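To make the `${keda:params:...}` style of interpolation concrete, here is a deliberately simplified toy resolver — an illustration of the idea only, not reclass's or Kapitan's actual implementation:

```python
import re

def interpolate(value, params, _pat=re.compile(r"\$\{([^}]+)\}")):
    """Resolve ${a:b:c}-style references against a nested dict (illustrative only)."""
    def lookup(match):
        node = params
        for part in match.group(1).split(":"):
            node = node[part]  # walk the nested structure key by key
        return str(node)
    return _pat.sub(lookup, value)

params = {"keda": {"params": {"chart_name": "keda", "chart_version": "2.11.2"}}}
path = interpolate(
    "system/sources/charts/${keda:params:chart_name}/${keda:params:chart_version}",
    params,
)
assert path == "system/sources/charts/keda/2.11.2"
```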
Generate the chart
```yaml
## inventory/classes/components/keda.yml
...
  charts:
    # Configures a helm generator to compile files for the given chart
    keda:
      chart_dir: ${keda:params:chart_dir}
      helm_params:
        namespace: ${keda:params:namespace}
        name: ${keda:params:chart_name}
      helm_values: ${keda:params:helm_values}
```
Now when we run kapitan compile we will see the chart being downloaded and the manifests being produced.
```shell
./kapitan compile -t keda --fetch
Dependency keda: saved to system/sources/charts/keda/keda/2.11.2/2.11.2
Rendered inventory (1.87s)
Compiled keda (2.09s)
```
kapitan compile breakdown
--fetch tells kapitan to fetch the chart if it is not found locally
-t keda tells kapitan to compile only the previously defined keda.yml target
```shell
ls -l compiled/keda/manifests/
total 660
-rw-r--r-- 1 ademaria root 659081 Aug 29 10:25 keda-bundle.yml
-rw-r--r-- 1 ademaria root     79 Aug 29 10:25 keda-namespace.yml
-rw-r--r-- 1 ademaria root   7092 Aug 29 10:25 keda-rbac.yml
-rw-r--r-- 1 ademaria root   1783 Aug 29 10:25 keda-service.yml
```
Now let's do a couple of things that would not be easy to do with helm natively.
You may already have noticed that the content of the chart is split into multiple files: this is because the generator is configured to separate different resource types into different files for convenience and consistency. The mechanism behind it is the "mutation" of type "bundle", which tells Kapitan which file to save a resource into.
Here are some example "mutations" which separate different kinds into different files.
Currently, most of the Keda-related resources are bundled into the -bundle.yml file. Instead, we want to separate them into their own files.
Let's add this configuration:
```yaml
charts:
  # Configures a helm generator to compile files for the given chart
  keda:
    chart_dir: ${keda:params:chart_dir}
    ...
    mutations:
      bundle:
        - conditions:
            # CRDs need to be setup separately
            kind: [CustomResourceDefinition]
          filename: '{content.component_name}-crds'
```
Upon compile, you can now see that the CRDs are being moved to a different file:
```shell
ls -l compiled/keda/manifests/
total 664
-rw-r--r-- 1 ademaria root  11405 Aug 29 10:56 keda-bundle.yml
-rw-r--r-- 1 ademaria root 647672 Aug 29 10:56 keda-crds.yml
-rw-r--r-- 1 ademaria root     79 Aug 29 10:56 keda-namespace.yml
-rw-r--r-- 1 ademaria root   7092 Aug 29 10:56 keda-rbac.yml
-rw-r--r-- 1 ademaria root   1783 Aug 29 10:56 keda-service.yml
```
In this tutorial we have explored some of Kapitan's capabilities to manage and perform changes to helm charts. The next tutorial will show how to make use of Keda and deploy a generator for Keda resources.
Fetch on compile
Use the --fetch flag to fetch Remote Inventories and the External Dependencies.
```shell
kapitan compile --fetch
```
This will download the dependencies according to their configurations. By default, kapitan does not overwrite an existing item with the same name as that of the fetched inventory items.
Use the --force-fetch flag to force fetch (update cache with freshly fetched items) and overwrite inventory items of the same name in the output_path.
```shell
kapitan compile --force-fetch
```
Use the --cache flag to cache the fetched items in the .dependency_cache directory in the root project directory.
The --embed-refs flag tells Kapitan to embed these references on compile, alongside the generated output. By doing so, the compiled output is self-contained and can be revealed by Tesoro or other tools.
```shell
kapitan compile --embed-refs
```
See how the compiled output for this specific target changes to embed the actual encrypted content (marked by ?{gpg: :embedded} to indicate it is a gpg reference) rather than just holding a reference to it (like ?{gpg:targets/minikube-mysql/mysql/password:ec3d54de}).
Kapitan allows you to conveniently override defaults by specifying a local .kapitan file in the root of your repository (relative to the kapitan configuration):
This comes in handy to make sure Kapitan runs consistently for your specific setup.
Info
Any Kapitan command can be overridden in the .kapitan dotfile, but here are some of the most common examples.
To enforce the Kapitan version used for compilation (for consistency and safety), you can add version to .kapitan:
```yaml
version: 0.30.0

...
```
This constraint can be relaxed to also allow minor versions to be accepted:
```yaml
version: 0.30 # Allows any 0.30.x release to run

...
```
Command line flags
You can also permanently define all command line flags in the .kapitan config file. For example:
```yaml
...

compile:
  indent: 4
  parallelism: 8
```
would be equivalent to running:
```shell
kapitan compile --indent 4 --parallelism 8
```
For flags which are shared by multiple commands, you can either selectively define them for single commands in a section with the same name as the command, or you can set any flags in the global section, in which case they're applied for all commands. If you set a flag in both the global section and a command's section, the value from the command's section takes precedence over the value from the global section.
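The precedence rule described above amounts to a simple dictionary merge; here is a minimal sketch of that behavior (an illustration, not Kapitan's actual config-loading code):

```python
def effective_flags(config, command):
    # Start from the global section, then let the command's own section win
    flags = dict(config.get("global", {}))
    flags.update(config.get(command, {}))
    return flags

dotfile = {
    "global": {"inventory-path": "./some_path"},
    "compile": {"indent": 4, "inventory-path": "./other_path"},
}

# compile's own section overrides the global value
assert effective_flags(dotfile, "compile") == {"inventory-path": "./other_path", "indent": 4}
# commands without their own section fall back to the global value
assert effective_flags(dotfile, "validate") == {"inventory-path": "./some_path"}
```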
As an example, you can configure the inventory-path in the global section of the Kapitan dotfile to make sure it's persisted across all Kapitan runs.
```yaml
...

global:
  inventory-path: ./some_path
```
which would be equivalent to running any command with --inventory-path=./some_path.
Another flag that you may want to set in the global section is inventory-backend to select a non-default inventory backend implementation.
```yaml
global:
  inventory-backend: reclass
```
which would be equivalent to always running Kapitan with --inventory-backend=reclass.
Please note that the inventory-backend flag currently can't be set through the command-specific sections of the Kapitan config file.
```text
Running yamllint on all inventory files...

.yamllint not found. Using default values
File ./inventory/classes/components/echo-server.yml has the following issues:
    95:29: forbidden implicit octal value "0550" (octal-values)
File ./inventory/classes/terraform/gcp/services.yml has the following issues:
    15:11: duplication of key "enable_compute_service" in mapping (key-duplicates)

Total yamllint issues found: 2

Checking for orphan classes in inventory...

No usage found for the following 6 classes:
{'components.argoproj.cd.argocd-server-oidc',
'components.helm.cert-manager-helm',
'components.rabbitmq-operator.rabbitmq-configuration',
'components.rabbitmq-operator.rabbitmq-operator',
'features.gkms-demo',
'projects.localhost.kubernetes.katacoda'}
```
Validates the schema of compiled output. Validate options are specified in the inventory under parameters.kapitan.validate. Supported types are:
## Usage

Can be run standalone, manually with `kapitan compile`, or automatically via the `.kapitan` dotfile.
```shell
kapitan validate
```
```text
created schema-cache-path at ./schemas
Validation: manifest validation successful for ./compiled/minikube-mysql/manifests/mysql_secret.yml
Validation: manifest validation successful for ./compiled/minikube-mysql/manifests/mysql_service_jsonnet.yml
Validation: manifest validation successful for ./compiled/minikube-mysql/manifests/mysql_service_simple.yml
```
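Validation targets are declared in the inventory under `parameters.kapitan.validate`. A hedged sketch of what an entry for one of the manifests above might look like (field names and version are illustrative):

```yaml
parameters:
  kapitan:
    validate:
      - type: kubernetes           # validate against Kubernetes schemas
        kind: service              # the resource kind to validate as
        version: 1.14.0            # illustrative schema version
        output_paths:
          - manifests/mysql_service_simple.yml
```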
Kubernetes has different resource kinds, for instance:
service
deployment
statefulset
Kapitan has built-in support for validation of Kubernetes kinds, and automatically integrates with https://kubernetesjsonschema.dev. See github.com/instrumenta/kubernetes-json-schema for more information.
Info
Kapitan will automatically download the schemas for Kubernetes Manifests directly from https://kubernetesjsonschema.dev
By default, the schemas are cached into ./schemas/, which can be modified with the --schemas-path option.
Permanently override schemas-path
Remember to use the .kapitan dotfile configuration to permanently override the schemas-path location.
```shell
$ cat .kapitan
# other options abbreviated for clarity
validate:
  schemas-path: custom/schemas/cache/path
```
Many of our features come from contributions from external collaborators. Please help us improve Kapitan by extending it with your ideas, or help us squash bugs you discover.
It's simple, just send us a PR with your improvements!
We would like to ask you to fork the Kapitan project and create a Pull Request targeting the master branch. All submissions, including submissions by project members, require review.
```shell
poetry install --all-extras --with dev --with test --with docs
```
Poetry creates a virtual environment with the required dependencies installed.
Run Kapitan with your own compiled code:
```shell
poetry run kapitan <your command>
```
Because we are using a pinned version of reclass which is added as a submodule into Kapitan's repository, you need to pull it separately by executing the command below:
Run make test to run all tests. If you modify anything in the examples/ folder make sure you replicate the compiled result of that in tests/test_kubernetes_compiled. If you add new features, run make test_coverage && make test_formatting to make sure the test coverage remains at current or better levels and that code formatting is applied.
If you would like to evaluate your changes by running your version of Kapitan, you can do that by running bin/kapitan from this repository or even setting an alias to it.
To make sure you adhere to the Style Guide for Python (PEP8) Python Black is used to apply the formatting so make sure you have it installed with pip3 install black.
## Apply via Git hook
Run pip3 install pre-commit to install the pre-commit framework.
In the Kapitan root directory, run pre-commit install
Create a branch named release-v<NUMBER>. Use v0.*.*-rc.* if you want pre-release versions to be uploaded.
Update CHANGELOG.md with the release changes.
Once reviewed and merged, Github Actions will auto-release.
The merge has to happen with a merge commit not with squash/rebase so that the commit message still mentions kapicorp/release-v* inside.
## Packaging extra resources in python package
To package any extra resources/files in the pip package, make sure you modify MANIFEST.in.
# Documentation
Poor documentation often prevents new users from adopting Kapitan. Help us improve it by contributing fixes and keeping it up-to-date.
Find something odd? Let us know or change it yourself: you can edit pages of this website on Github by clicking the pencil icon at the top right of this page!
We use mkdocs to generate our gh-pages from .md files under docs/ folder.
Updating our gh-pages is therefore a two-step process.
## Update the markdown
Submit a PR for our master branch that updates the .md file(s). Test how the changes would look when deployed to gh-pages by serving them on localhost:
Edit the strict property in mkdocs.yml and set it to false.
```shell
make local_serve_documentation
```
Now the documentation site should be available at localhost:8000.
## Submit a PR
Once the above PR has been merged, our CI will deploy your docs automatically.
This input type simply copies the input templates to the output directory without any rendering/processing. For Copy, input_paths can be either a file or a directory: in the case of a directory, all the templates in the directory will be copied to output_path.
Supported output types: N/A (no need to specify output_type)
Example
```yaml
kapitan:
  compile:
    - input_type: copy
      ignore_missing: true # Do not error if path is missing. Defaults to False
      input_paths:
        - resources/state/${target_name}/.terraform.lock.hcl
      output_path: terraform/
```
This input type executes an external script or binary. This can be used to manipulate already compiled files or execute binaries outside of kapitan that generate or manipulate files.
For example, ytt is a useful yaml templating tool. It is not built into the kapitan binary, however, with the external input type, we could specify the ytt binary to be executed with specific arguments and environment variables.
In this example, we're removing a label from the k8s manifests in a directory ingresses and placing them into the compiled target directory.
Supported output types: N/A (no need to specify output_type)
Additionally, the input type supports field env_vars, which can be used to set environment variables for the external command. By default, the external command doesn't inherit any environment variables from Kapitan's environment. However, if environment variables $PATH or $HOME aren't set in env_vars, they will be propagated from Kapitan's environment to the external command's environment.
Finally, Kapitan will substitute ${compiled_target_dir} in both the command's arguments and the environment variables. This variable needs to be escaped in the configuration to ensure that reclass won't interpret it as a reclass reference.
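A hypothetical external entry tying these pieces together (the script name and its argument are made up for illustration; note the escaped `\${compiled_target_dir}` so reclass does not treat it as a reference):

```yaml
parameters:
  kapitan:
    compile:
      - input_type: external
        input_paths:
          - scripts/patch-ingresses.sh      # hypothetical script
        output_path: .
        args:
          - \${compiled_target_dir}/ingresses
        env_vars:
          SOME_FLAG: "true"                 # $PATH/$HOME propagate automatically if unset here
```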
# Input Type | Helm
This is a Python binding to helm template command for users with helm charts. This does not require the helm executable, and the templates are rendered without the Tiller server.
Unlike other input types, Helm input types support the following additional parameters under kapitan.compile:
helm_values is an object containing values specified that will override the default values in the input chart. This has exactly the same effect as specifying --values custom_values.yml for helm template command where custom_values.yml structure mirrors that of helm_values.
helm_values_files is an array containing the paths to helm values files used as input for the chart. This has exactly the same effect as specifying --file my_custom_values.yml for the helm template command, where my_custom_values.yml is a helm values file. If the same keys exist in helm_values and in multiple specified helm_values_files, the last indexed file in helm_values_files takes precedence, followed by the preceding helm_values_files, and at the bottom the helm_values defined in the compile block. There is an example in the tests: the monitoring-dev (kapitan/tests/test_resources/inventory/targets/monitoring-dev.yml) and monitoring-prd (kapitan/tests/test_resources/inventory/targets/monitoring-prd.yml) targets both use the monitoring (tests/test_resources/inventory/classes/component/monitoring.yml) component. This component has helm chart input and takes a common.yml helm_values file which is "shared" by any target that uses the component, and it also takes a dynamically defined file based on a kapitan variable defined in the target.
helm_path can be used to provide the helm binary name or path. helm_path defaults to the value of the KAPITAN_HELM_PATH env var if it is set, otherwise it defaults to helm.
helm_params correspond to the flags for helm template. Most flags that helm supports can be used here by replacing '-' by '_' in the flag name.
Flags without argument must have a boolean value, all other flags require a string value.
Special flags:
name: equivalent of helm template [NAME] parameter. Ignored if name_template is also specified. If neither name_template nor name are specified, the --generate-name flag is used to generate a name.
output_file: name of the single file used to output all the generated resources. This is equivalent to calling helm template without specifying an output dir. If not specified, each resource is generated into a distinct file.
include_crds and skip_tests: These flags are enabled by default and should be set to false to be removed.
debug: prints the helm debug output in kapitan debug log.
namespace: note that due to the restriction on helm template command, specifying the namespace does not automatically add metadata.namespace property to the resources. Therefore, users are encouraged to explicitly specify it in all resources:
```yaml
metadata:
  namespace: {{ .Release.Namespace }} # or any other custom values
```
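Putting the Helm parameters above together, a hypothetical compile entry might look like this (the chart path, release name, namespace and values are illustrative):

```yaml
parameters:
  kapitan:
    compile:
      - input_type: helm
        input_paths:
          - components/charts/nginx-ingress   # chart fetched via dependencies
        output_path: manifests
        helm_values:
          controller:
            replicaCount: 2                   # overrides the chart default
        helm_params:
          name: my-release                    # helm template [NAME]
          namespace: ingress                  # remember: not auto-added to resources
```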
Let's use nginx-ingress helm chart as the input. Using kapitan dependency manager, this chart can be fetched via a URL as listed in https://helm.nginx.com/stable/index.yaml.
On a side note, https://helm.nginx.com/stable/ is the chart repository URL which you would helm repo add, and this repository should contain index.yaml that lists out all the available charts and their URLs. By locating this index.yaml file, you can locate all the charts available in the repository.
We can use version 0.3.3 found at https://helm.nginx.com/stable/nginx-ingress-0.3.3.tgz. We can create a simple target file as inventory/targets/nginx-from-chart.yml whose content is as follows:
The chart is fetched before compile, which creates components/charts/nginx-ingress folder that is used as the input_paths for the helm input type. To confirm if the helm_values actually has overridden the default values, we can try:
| Step | Flag | Description | Configuration |
|---|---|---|---|
| Inventory | | Kapitan uses reclass to render a final version of the inventory. | |
| Fetch | --fetch | Kapitan fetches external dependencies | parameters.kapitan.dependencies |
| Compile | | Kapitan compiles the input types for each target | parameters.kapitan.compile |
| Reveal | --reveal | Kapitan reveals the secrets directly in the compiled output | parameters.kapitan.secrets |
| Copy | | Kapitan moves the output files from the tmp directory to /compiled | |
| Validate | --validate | Kapitan validates the schema of compiled output. | parameters.kapitan.validate |
| Finish | | Kapitan has completed all tasks | |

## Supported input types
Input types can be specified in the inventory under kapitan.compile in the following format:
We define a list with all the templates we want to compile with this input type
The input type will render the files at the root of the target compiled folder e.g. compiled/${target_name}
We pass the list as input_paths
Notice how we make use of variable interpolation to use the convenience of a list to add all the files we want to compile. You can now simply add to that list from any other place in the inventory that calls that class.
input_paths can either be a file, or a directory: in case of a directory, all the templates in the directory will be rendered.
input_params (optional) can be used to pass extra parameters, helpful when needing to use a similar template for multiple components in the same target.
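A sketch of a jinja2 compile entry using these fields (the paths and the input_params key are illustrative):

```yaml
parameters:
  kapitan:
    compile:
      - input_type: jinja2
        input_paths:
          - templates/docs/README.md   # a single file, or a directory
        output_path: docs
        input_params:
          component_name: nginx        # hypothetical extra parameter
```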
We usually store documentation templates under the templates/docs directory.
examples/kubernetes/docs/nginx/README.md
```markdown
{% set i = inventory.parameters %}

# Welcome to the README!

Target *{{ i.target_name }}* is running:

* {{ i.nginx.replicas }} replicas of *nginx* running nginx image {{ i.nginx.image }}
* on cluster {{ i.cluster.name }}
```
Compiled result
```markdown
# Welcome to the README!

Target *minikube-nginx-jsonnet* is running:

* 1 replicas of *nginx* running nginx image nginx:1:15.8
* on cluster minikube
```
When we use Jinja to render scripts, we tend to call them \"canned scripts\" to indicate that these scripts have everything needed to run without extra parameters.
We usually store script templates under the templates/scripts directory.
examples/kubernetes/components/nginx-deploy.sh
```shell
#!/bin/bash -e
DIR=$(dirname ${BASH_SOURCE[0]})
{% set i = inventory.parameters %} #(1)!

KUBECTL="kubectl -n {{i.namespace}}" #(2)!

# Create namespace before anything else
${KUBECTL} apply -f ${DIR}/pre-deploy/namespace.yml

for SECTION in manifests
do
  echo "## run kubectl apply for ${SECTION}"
  ${KUBECTL} apply -f ${DIR}/${SECTION}/ | column -t
done
```
We import the inventory as a Jinja variable
We use it to set the namespace explicitly
Compiled result
```shell
#!/bin/bash -e
DIR=$(dirname ${BASH_SOURCE[0]})
 #(1)!

KUBECTL="kubectl -n minikube-nginx-jsonnet" #(2)!

# Create namespace before anything else
${KUBECTL} apply -f ${DIR}/pre-deploy/namespace.yml

for SECTION in manifests
do
  echo "## run kubectl apply for ${SECTION}"
  ${KUBECTL} apply -f ${DIR}/${SECTION}/ | column -t
done
```
The script is now a "canned script" and ready to be used for this specific target.
You can see that the namespace has been replaced with the target's one.
## Accessing the inventory
Templates will be provided at runtime with 3 variables:
inventory: To access the inventory for that specific target.
inventory_global: To access the inventory of all targets.
input_params: To access the optional dictionary provided to the input type.
Use of inventory_global
inventory_global can be used to generate a \"global\" README.md that contains a link to all generated targets.
```markdown
| *Target* |
|------------------------------------------------------------------------|
{% for target in inventory_global | sort() %}
{% set p = inventory_global[target].parameters %}
|[{{target}}](../{{target}}/docs/README.md) |
{% endfor %}
```
{{ "hello world" | regex_replace(pattern="world", replacement="kapitan") }}
escape all regular expressions special characters from string
{{ \"+s[a-z].*\" | regex_escape}}
perform re.search and return the list of matches or a backref
{{ "hello world" | regex_search("world.*") }}
perform re.findall and return the list of matches as array
{{ "hello world" | regex_findall("world.*") }}
return list of matched regular files for glob
{{ "./path/file*" | fileglob }}
return the bool for value
{{ "yes" | bool }}
value ? true_val : false_val
{{ condition | ternary(\"yes\", \"no\")}}
randomly shuffle elements of a list
{{ [1, 2, 3, 4, 5] | shuffle }}
reveal ref/secret tag only if compile --reveal flag is set
{{ \"?{base64:my_ref}\" | reveal_maybe}}
Tip
You can also provide the path to your custom filter modules in the CLI. By default, you can put your filters in lib/jinja2_filters.py and they will automatically get loaded.
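As a sketch of what such a module might contain — assuming Kapitan exposes the top-level functions in lib/jinja2_filters.py as filters, and with an illustrative `obfuscate` filter name not taken from the docs:

```python
# lib/jinja2_filters.py -- hypothetical custom filter module.
# Each top-level function becomes usable as {{ value | obfuscate }}.

def obfuscate(value, visible=2):
    """Mask all but the last `visible` characters of a string."""
    text = str(value)
    if len(text) <= visible:
        return "*" * len(text)
    return "*" * (len(text) - visible) + text[-visible:]
```

In a template this would then be used as `{{ "password" | obfuscate }}`, rendering `******rd`.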
# Input Type | Jsonnet
Jsonnet is a superset of JSON that includes features such as conditionals, variables and imports. Refer to the jsonnet docs to understand how it works.
Note: unlike jinja2 templates, one jsonnet template can output multiple files (one per object declared in the file).
## Accessing the inventory
Typical jsonnet files would start as follows:
```jsonnet
local kap = import "lib/kapitan.libjsonnet"; #(1)!
local inv = kap.inventory(); #(2)!
local p = inv.parameters; #(3)!

{
  "data_java_opts": p.elasticsearch.roles.data.java_opts, #(4)!
}
```
Import the Kapitan inventory library.
Assign the content of the full inventory for this specific target to the inv variable.
Assign the content of the inventory.parameters to a variable p for convenience.
Use the p variable to access a specific inventory value
Note: The dictionary keys of the jsonnet object are used as filenames for the generated output files. If your jsonnet is not a dictionary, but is a valid json(net) object, then the output filename will be the same as the input filename. E.g. if 'my_string' is inside templates/input_file.jsonnet, the generated output file will be named input_file.json and will contain "my_string".
If validation.valid is not true, it will then fail compilation and display validation.reason.
Fails validation because storage has an invalid pattern (10Z)
```text
Jsonnet error: failed to compile /code/components/mysql/main.jsonnet:
RUNTIME ERROR: '10Z' does not match '^[0-9]+[MGT]{1}$'

Failed validating 'pattern' in schema['properties']['storage']:
    {'pattern': '^[0-9]+[MGT]{1}$', 'type': 'string'}

On instance['storage']:
    '10Z'

/code/mysql/main.jsonnet:(19:1)-(43:2)

Compile error: failed to compile target: minikube-mysql
```
# Input Type | Kadet
Kadet is an extensible input type for Kapitan that enables you to generate templates using Python.
The key benefit is the ability to utilize familiar programming principles while having access to Kapitan's powerful inventory system.
A library that defines resources as classes using the Base Object class is required. These can then be utilized within components to render output.
The following functions are provided by the class BaseObj().
Method definitions:
new(): Provides parameter checking capabilities
body(): Enables in-depth parameter configuration
Method functions:
root(): Defines values that will be compiled into the output
need(): Ability to check & define input parameters
update_root(): Updates the template file associated with the class
A class can be a resource such as a Kubernetes Deployment as shown here:
The deployment is a BaseObj() which has two main functions.
new(self) is used to perform parameter validation & template compilation
body(self) is utilized to set those parameters to be rendered.
self.root.metadata.name is a direct reference to a key in the corresponding yaml.
Kadet supports importing libraries as you would normally do with Python. These libraries can then be used by the components to generate the required output.
We import a library called kubelib using load_from_search_paths()
We use kubelib to create a Container
We create an output of type BaseObj and we will be updating the root element of this output.
We use kubelib to create a Deployment kind. The Deployment makes use of the Container created.
We use kubelib to create a Service kind.
We return the object. Kapitan will render everything under output.root
Kadet uses a library called addict to organise the parameters in line with the yaml templates. As shown above we create a BaseObj() named output. We update the root of this output with the data structure returned from kubelib. This output is then returned to kapitan to be compiled into the desired output type.
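To make the addict pattern concrete, here is a minimal, illustrative re-implementation of the auto-vivifying dict idea in plain Python — this is not the real addict library or Kadet's BaseObj, just a sketch of the behaviour they rely on:

```python
# Sketch of addict-style attribute access: intermediate keys are
# created automatically on first access, so you can write
# root.metadata.name without building the nesting by hand.

class AutoDict(dict):
    def __getattr__(self, key):
        # Vivify a nested AutoDict the first time the key is touched.
        if key not in self:
            self[key] = AutoDict()
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value

root = AutoDict()
root.metadata.name = "nginx"   # creates root["metadata"] on the fly
root.spec.replicas = 3
```

The resulting plain-dict structure (`{"metadata": {"name": "nginx"}, "spec": {"replicas": 3}}`) maps directly onto the yaml that Kapitan renders from `output.root`.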
For a deeper understanding please refer to github.com/kapicorp/kadet
This input type simply removes files or directories. This can be helpful if you can't control particular files generated during other compile inputs.
For example, to remove a file named copy_target, specify an entry to input_paths, compiled/${kapitan:vars:target}/copy_target.
```yaml
parameters:
  target_name: removal
  kapitan:
    vars:
      target: ${target_name}
    compile:
      - input_type: copy
        input_paths:
          - copy_target
        output_path: .
      # test removal of a file
      - input_type: remove
        input_paths:
          - compiled/${kapitan:vars:target}/copy_target
        output_path: .
```
As a reminder, each input block within the compile array is run sequentially for a target in Kapitan. If we reversed the order of the inputs above like so:
The next thing you want to learn about the inventory are classes. A class is a yaml file containing a fragment of yaml that we want to import and merge into the inventory.
Classes are fragments of yaml: feature sets, commonalities between targets. Classes let you compose your Inventory from smaller bits, eliminating duplication and exposing all important parameters from a single, logically organised place. As the Inventory lets you reference other parameters in the hierarchy, classes become places where you can define something that will then get referenced from another section of the inventory, allowing for composition.
Classes are organised under the inventory/classes directory substructure. They are organised hierarchically in subfolders, and the way they can be imported into a target or other classes depends on their location relative to the inventory/classes directory.
Notice that this class includes an import definition for another class, kapitan.common. We've already learned this means that kapitan will import a file on disk called inventory/classes/kapitan/common.yml
You can also see that in the parameters section we now encounter a new syntax which unlocks another powerful inventory feature: parameters interpolation!
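As an illustration of parameter interpolation (the values are made up), a class can reference other parameters anywhere in the hierarchy using the `${...}` syntax:

```yaml
parameters:
  cluster:
    location: europe                      # illustrative value
  application:
    location: ${cluster:location}         # resolves to "europe" at render time
```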
The Inventory is a core component of Kapitan: this section aims to explain how it works and how to best take advantage of it.
The Inventory is a hierarchical YAML based structure which you use to capture anything that you want to make available to Kapitan, so that it can be passed on to its templating engines.
The first concept to learn about the Inventory is the target. A target is a file, found under the inventory/targets substructure, that tells Kapitan what you want to compile. It will usually map to something you want to do with Kapitan.
For instance, you might want to define a target for each environment that you want to deploy using Kapitan.
The Inventory lets you also define and reuse common configurations through YAML files that are referred to as classes: by listing classes into target, their content gets merged together and allows you to compose complex configurations without repetitions.
By combining target and classes, the Inventory becomes the SSOT for your whole configuration, and learning how to use it will unleash the real power of Kapitan.
Info
The Kapitan Inventory is based on an open source project called reclass and you can find the full documentation on our Github clone. However, we discourage you from looking directly at the reclass documentation before you learn more about Kapitan, because Kapitan uses a fork of reclass and greatly simplifies the reclass experience.
Info
Kapitan allows users to switch the inventory backend to reclass-rs. You can switch the backend to reclass-rs by passing --inventory-backend=reclass-rs on the command line. Alternatively, you can define the backend in the .kapitan config file.
See the reclass-rs inventory backend documentation for more details.
Note
Kapitan enforces very little structure for the Inventory, so that you can adapt it to your specific needs. This might be overwhelming at the beginning: don't worry, we will explain best practices and give guidelines soon.
By default, Kapitan will search for its Inventory under inventory/classes and inventory/targets.
namespace should take the same value defined in target_name
target_name should take the literal string dev
application.location should take the same value as defined in cluster.location
It is important to notice that the inventory can refer to values defined in other classes, as long as they are imported by the target. So for instance with the following example
Here in this case application.location refers to a value location which has been defined elsewhere, perhaps (but not necessarily) in the project.production class.
Also notice that the class name (project.production) does not in any way influence the name or the structure of the yaml it imports into the file
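A hedged sketch of what such a class/target pair could look like (all file names and values here are hypothetical):

```yaml
# inventory/classes/project/production.yml (hypothetical)
parameters:
  location: europe-west1

# inventory/targets/acme.yml (hypothetical)
classes:
  - project.production
parameters:
  application:
    location: ${location}   # resolves via the imported class above
```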
Reclass-rs is a reimplementation of Kapitan's Reclass fork in Rust. Please note that the Rust implementation doesn't support all the features of Kapitan's Reclass fork yet.
However, reclass-rs improves rendering time for the inventory significantly, especially if you're making heavy use of parameter references in class includes. If some of the Reclass features or options that you're using are missing in reclass-rs, don't hesitate to open an issue in the reclass-rs project.
To use the reclass-rs inventory backend, you need to pass --inventory-backend=reclass-rs on the command line. If you want to permanently switch to the reclass-rs inventory backend, you can select the inventory backend in the .kapitan config file:
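For example, mirroring the dotfile syntax shown earlier:

```yaml
global:
  inventory-backend: reclass-rs
```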
For the performance comparison, a real Kapitan inventory which makes heavy use of parameter interpolation in class includes was rendered with both Reclass and reclass-rs. The example inventory that was used for the performance comparison contains 325 classes and 56 targets. The example inventory renders to a total of 25MB of YAML.
```text
$ time kapitan inventory -v --inventory-backend=reclass > inv.yml
[ ... some output omitted ... ]
kapitan.resources DEBUG Using reclass as inventory backend
kapitan.inventory.backends.reclass DEBUG Inventory reclass: No config file found. Using reclass inventory config defaults
kapitan.inventory.backends.reclass DEBUG Inventory rendering with reclass took 0:01:06.037057

real 1m23.840s
user 1m23.520s
sys 0m0.287s
```
Reclass takes 1 minute and 6 seconds to render the example inventory. The rest of the runtime (roughly 18 seconds) is spent in writing the resulting 25MB of YAML to the output file.
```text
$ time kapitan inventory -v --inventory-backend=reclass-rs > inv-rs.yml
[ ... some output omitted ... ]
kapitan.resources DEBUG Using reclass-rs as inventory backend
kapitan.inventory.backends.reclass DEBUG Inventory reclass: No config file found. Using reclass inventory config defaults
reclass-config.yml entry 'storage_type=yaml_fs' not implemented yet, ignoring...
reclass-config.yml entry 'inventory_base_uri=./inventory' not implemented yet, ignoring...
reclass-config.yml entry 'allow_none_override=true' not implemented yet, ignoring...
kapitan.inventory.backends.reclass_rs DEBUG Inventory rendering with reclass-rs took 0:00:01.717107

real 0m19.921s
user 0m35.586s
sys 0m1.066s
```
reclass-rs takes 1.7 seconds to render the example inventory. The rest of the runtime (roughly 18 seconds) is spent in writing the resulting 25MB of YAML to the output file.
A target is a file that lives under the inventory/targets subdirectory, and that tells Kapitan what you want it to do for you.
Kapitan will recognise all YAML files in the inventory/targets subtree as targets.
Note
Only use .yml as extension for Inventory files. .yaml will not be recognised as a valid Inventory file.
What you do with a target is largely up to you and your setup. Common examples:
clusters: Map each target to a cluster, capturing all configurations needed for a given cluster. For instance: targets/clusters/production-cluster1.yml
applications: When using Kapitan to manage Kubernetes applications, you might define a target for everything that you would normally deploy in a single namespace, including all its resources, scripts, secrets and documentation. For instance: targets/mysql.yml
environments: You might want to define a different target for each environment you have, like dev.yml, test.yml and prod.yml
cloud projects: When working with Terraform, it may be convenient to group targets by cloud project. For instance: targets/gcp/projects/engineering-prod.yml.
single tenancy: When deploying a single-tenancy application, you might combine the approaches above, and have a target acme.yml that is used to define both Terraform and Kubernetes resources for a given tenant, perhaps also with some ArgoCD or Spinnaker pipelines to go with it.
Example
If you have configured your kapitan repository like in Quick Start instructions, you can run the commands we give during the course of this documentation.
When you run kapitan compile, you instruct Kapitan to generate for each given target a directory under compiled with the same name. Under this directory you will find all the files that have been generated by Kapitan for that target.
classes is a list of class files you will want to import.
parameters allows for local override of what is unique to this target.
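A minimal, hypothetical target file combining the two (class names and values are illustrative):

```yaml
classes:
  - common                 # hypothetical shared class
  - components.nginx       # hypothetical component class
parameters:
  target_name: dev         # local override unique to this target
```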
Info
the kapitan key under the root parameters is reserved for kapitan usage. Some examples:
```yaml
parameters:
  kapitan:
    compile:      # input types configuration section
    dependencies: # dependencies configuration section to download resources
    secrets:      # secret encryption/decryption configuration section
    validate:     # items which indicate which compiled output to validate
    vars:         # which are also passed down to input types as context
```
# Kapitan: Keep your ship together
Kapitan aims to be your one-stop configuration management solution to help you manage the ever-growing complexity of your configurations by enabling Platform Engineering and GitOps workflows.
It streamlines complex deployments across heterogeneous environments while providing a secure and adaptable framework for managing infrastructure configurations. Kapitan's inventory-driven model, powerful templating capabilities, and native secret management tools offer granular control, fostering consistency, reducing errors, and safeguarding sensitive data.
Empower your team to make changes to your infrastructure whilst maintaining full control, with a GitOps approach and full transparency.
Join the community #kapitan
Help us grow: give us a star or even better sponsor our project
## Why do I need Kapitan?

## Video Tutorials to get started
Kapitan Youtube Channel
Videos: Inventory, References, Helm and Generators integration, Rawkode: Introduction to Kapitan.

# Who uses Kapitan
If you're using Kapitan in your organization, please let us know by adding to this list on the docs/ADOPTERS.md file.
# FAQ

## Why do I need Kapitan?
See Why do I need Kapitan?
## Ask your question
Please use the comments facility below to ask your question
# Kapitan Overview

## Setup your installation
Using our reference repositories you can easily get started with Kapitan
The kapicorp/kapitan-reference repository is meant to show you many working examples of things you can do with Kapitan. You can use it to get familiar with Kapitan.
## Install Kapitan using pip

### User

**Linux**: kapitan will be installed in `$HOME/.local/lib/python3.7/bin`

```shell
pip3 install --user --upgrade kapitan
```

**Mac**: kapitan will be installed in `$HOME/Library/Python/3.7/bin`
Proposals can be submitted for review by performing a pull request against this repository. If approved, the proposal will be published here for further review by the Kapitan community. Proposals tend to be improvements or design considerations for new features.
One of the motivations behind Kapitan's design is our belief that everything about your setup should be tracked, and Kapitan takes this to the extreme. Sometimes, however, we have to manage values that we do not think belong in the Inventory: perhaps they are too variable (for instance, a Git commit sha that changes with every build) or too sensitive, like a password or a generic secret, which should always be encrypted.
Kapitan has built-in support for References, which you can use to manage both these use cases.
Kapitan References supports the following backends:
| Backend | Description | Encrypted |
|---|---|---|
| plain | Plain text (e.g. commit sha) | No |
| base64 | Base64, non-confidential but with base64 encoding | No |
| gpg | Support for https://gnupg.org/ | Yes |
| gkms | GCP KMS | Yes |
| awskms | AWS KMS | Yes |
| azkms | Azure Key Vault | Yes |
| env | Environment | No |
| vaultkv | Hashicorp Vault (RO) | Yes |
| vaulttransit | Hashicorp Vault (encrypt, decrypt, update_key, rotate_key) | Yes |
"},{"location":"references/#setup","title":"Setup","text":"
Some reference backends require configuration, both in the Inventory and to configure the actual backend.
Get started
If you want to get started with references but don't want to deal with the initial setup, you can use the plain and base64 reference types. These are great for demos, but as we will see, they are extremely helpful even in production environments.
Danger
Both plain and base64 references do not support encryption: they are intended for development or demo purposes only. DO NOT use plain or base64 for storing sensitive information!
Backend configuration
Configuration for each backend varies, and it is performed by configuring the inventory under parameters.kapitan.secrets.
Because they both set the parameters.backend variable, you can define a reference whose backend changes based on which class is assigned to the target.
inventory/targets/cloud/gcp/acme.yml
classes:\n- cloud.aws\n\nparameters:\n  ...\n  mysql:\n    # the secret backend will change based on the cloud assigned to this target\n    root_password: ?{${backend}:targets/${target_name}/mysql/root_password}\n  ...\n
The env backend works in a slightly different way, as it allows you to reference environment variables at runtime.
For example, for a reference called ?{env:targets/envs_defaults/mysql_port_${target_name}}, Kapitan would look for an environment variable called KAPITAN_ENV_mysql_port_${TARGET_NAME}.
If that variable cannot be found in the Kapitan environment, the default will be taken from the refs/targets/envs_defaults/mysql_port_${TARGET_NAME} file instead.
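The lookup order can be sketched in Python. This is illustrative only, not Kapitan's actual implementation; the function name resolve_env_ref and the refs-root layout are assumptions:

```python
import os

def resolve_env_ref(ref_path, refs_root="refs"):
    """Sketch of the env backend lookup order (illustrative, not Kapitan's code).

    1. Check the environment for KAPITAN_ENV_<last path component>.
    2. Fall back to the contents of <refs_root>/<ref_path>.
    """
    env_var = "KAPITAN_ENV_" + ref_path.rsplit("/", 1)[-1]
    if env_var in os.environ:
        return os.environ[env_var]
    with open(os.path.join(refs_root, ref_path)) as f:
        return f.read().strip()
```

So for a reference ending in mysql_port_acme, the variable KAPITAN_ENV_mysql_port_acme wins if set; otherwise the file under refs/ is read.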
Kapitan has built-in capabilities to initialise its references on creation, using an elegant combination of primary and secondary functions. This is extremely powerful because it allows you to ensure they are always initialised with sensible values.
To automate the creation of the reference, you can add one of the following primary functions to the reference tag by using the syntax ||primary_function:param1:param2
For instance, to automatically initialise a reference with a random string with a length of 32 characters, you can use the random primary function
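For example (a sketch; the gkms backend choice and the exact spelling of the random function's parameters are assumptions, so check the reference functions supported by your Kapitan version):

```yaml
parameters:
  mysql:
    # initialised with a 32-character random string on first compile
    root_password: ?{gkms:targets/${target_name}/mysql/root_password||random:str:32}
```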
The || operator here behaves like a logical OR:
If the reference file does not exist, Kapitan will use the function to initialise it.
If the reference file exists, no functions will run.
Automate secret rotation with ease
You can take advantage of it to implement easy rotation of secrets. Simply delete the reference files, and run kapitan compile: let Kapitan do the rest.
If you use reveal to initialise a reference, as in my_reference||reveal:source_reference, my_reference will not be automatically updated if source_reference changes. Please make sure you also re-initialise my_reference correctly.
"},{"location":"references/#using-subvars-to-ingest-yaml-from-command-line-tools","title":"Using subvars to ingest yaml from command line tools","text":"
Subvars have a very practical use for storing YAML output coming straight from other tools. For instance, you could use the GCP gcloud command to get all the information about a cluster and write it into a reference.
Combined with a Jinja template, you could automatically generate documentation containing the details of the clusters you use.
```text\n{% set p = inventory.parameters %}\n# Documentation for {{p.target_name}}\n\nCluster [{{p.cluster.name}}]({{p.cluster.link}}) has release channel {{p.cluster.release_channel}}\n```\n
Considering a key-value pair like my_key:my_secret at the path secret/foo/bar in a kv-v2 (KV version 2) secret engine on the Vault server, to use this as a secret:
Leave mount empty to use the specified mount from vault params from the inventory (see below). Same applies to the path/in/vault where the ref path in kapitan gets taken as default value.
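For instance, a tag that spells out mount, vault path and key inline might look like this (a sketch; all names are placeholders, and the exact tag format may vary by Kapitan version):

```yaml
parameters:
  mysql:
    # ?{vaultkv:<kapitan_path>:<mount>:<path/in/vault>:<key>}
    root_password: ?{vaultkv:targets/${target_name}/mysql/root_password:secret:foo/bar:my_key}
```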
Parameters in the secret file are collected from the inventory of the target given on the CLI with -t <target_name>. If no target is provided, Kapitan will identify the variables from the environment when revealing the secret.
The environment variables that can also be defined in the kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH and VAULT_CACERT. Note that providing these variables through the inventory in envvar style is deprecated: users should update their inventory to set these values in keys without the VAULT_ prefix and in all lowercase. For example, VAULT_ADDR: https://127.0.0.1:8200 should be given as addr: https://127.0.0.1:8200 in the inventory. Please note that configuring one of these values both under kapitan.secrets.vaultkv in the inventory and in the environment will cause a validation error.
Extra parameters that can be defined in inventory are:
auth: the authentication method to use: token, userpass, ldap, github or approle
mount: the mount point of the key's path, e.g. if path=alpha-secret/foo/bar then mount: alpha-secret (default: secret)
engine: the secret engine used, either kv-v2 or kv (default: kv-v2)
The environment variables that cannot be defined in the inventory are VAULT_TOKEN, VAULT_USERNAME, VAULT_PASSWORD, VAULT_ROLE_ID and VAULT_SECRET_ID.
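Putting the parameters above together, a vaultkv configuration in the inventory might look like this (a sketch; values are illustrative):

```yaml
parameters:
  kapitan:
    secrets:
      vaultkv:
        addr: https://127.0.0.1:8200   # replaces the VAULT_ADDR environment variable
        skip_verify: true              # replaces VAULT_SKIP_VERIFY
        auth: token
        engine: kv-v2
        mount: secret
```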
Considering a key-value pair like my_key:my_secret at the path secret/foo/bar in a transit secret engine on the Vault server, to use this as a secret:
Parameters in the secret file are collected from the inventory of the target given on the CLI with -t <target_name>. If no target is provided, Kapitan will identify the variables from the environment when revealing the secret.
Environment variables that can be defined in kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH & VAULT_CACERT. Extra parameters that can be defined in inventory are:
auth: the authentication method to use: token, userpass, ldap, github or approle
mount: the mount point of the key's path, e.g. mount: my_mount (default: transit)
crypto_key: Name of the encryption key defined in vault
always_latest: always rewrap ciphertext to the latest rotated crypto_key version. Environment variables that cannot be defined in the inventory are VAULT_TOKEN, VAULT_USERNAME, VAULT_PASSWORD, VAULT_ROLE_ID and VAULT_SECRET_ID.
To encrypt secrets using keys stored in Azure's Key Vault, a key_id is required to identify an Azure key object uniquely. It should be of the form https://{keyvault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}.
"},{"location":"references/#defining-the-kms-key","title":"Defining the KMS key","text":"
This is done in the inventory under parameters.kapitan.secrets.
This introduces a new experimental input type called Kadet.
Kadet is essentially a Python module offering a set of classes and functions to define objects which will compile to JSON or YAML. A complete example is available in examples/kubernetes/components/nginx.
BaseObj implements the basic object that compiles into JSON or YAML. Setting keys in self.root means they will appear in the compiled output. Keys can be set as a hierarchy of attributes (courtesy of addict). The self.body() method is reserved for setting self.root on instantiation:
The self.new() method can be used to define a basic constructor. self.need() checks if a key is set and errors if it isn't (with an optional custom error message). kwargs that are passed to a new instance of BaseObj are always accessible via self.kwargs. In this example, MyApp needs name and foo to be passed as kwargs.
class MyApp(BaseObj):\n    def new(self):\n        self.need(\"name\")\n        self.need(\"foo\", msg=\"please provide a value for foo\")\n\n    def body(self):\n        self.root.name = self.kwargs.name\n        self.root.inner.foo = self.kwargs.foo\n        self.root.list = [1, 2, 3]\n\nobj = MyApp(name=\"myapp\", foo=\"bar\")\n
","tags":["kubernetes","kadet"]},{"location":"kap_proposals/kap_0_kadet/#setting-a-skeleton","title":"Setting a skeleton","text":"
Defining a large body with Python can be quite hard and repetitive to read and write. The self.update_root() method allows importing a YAML/JSON file to set the skeleton of self.root.
class MyApp(BaseObj):\n    def new(self):\n        self.need(\"name\")\n        self.need(\"foo\", msg=\"please provide a value for foo\")\n        self.update_root(\"path/to/skel.yml\")\n
Extending a skeleton'd MyApp is possible just by implementing self.body():
class MyApp(BaseObj):\n    def new(self):\n        self.need(\"name\")\n        self.need(\"foo\", msg=\"please provide a value for foo\")\n        self.update_root(\"path/to/skel.yml\")\n\n    def body(self):\n        self.set_replicas()\n        self.root.metadata.labels = {\"app\": \"mylabel\"}\n\n    def set_replicas(self):\n        self.root.spec.replicas = 5\n
A component in Kadet is a Python module that must implement a main() function returning an instance of BaseObj. The inventory is also available via the inventory() function.
For example, a tinyapp component:
# components/tinyapp/__init__.py\nfrom kapitan.inputs.kadet import BaseObj, inventory\n\ninv = inventory()  # returns inventory for target being compiled\n\nclass TinyApp(BaseObj):\n    def body(self):\n        self.root.foo = \"bar\"\n        self.root.replicas = inv.parameters.tinyapp.replicas\n\ndef main():\n    obj = BaseObj()\n    obj.root.deployment = TinyApp()  # will compile into deployment.yml\n    return obj\n
A library in --search-paths (which now defaults to . and lib/) can also be a module that kadet components import. It is loaded using the load_from_search_paths():
","tags":["kubernetes","kadet"]},{"location":"kap_proposals/kap_10_azure_key_vault/","title":"Support for Azure Key Management","text":"
This feature will enable users to encrypt secrets using keys stored in Azure's Key Vault. The azkms keyword will be used to access the azure key management backend.
key_id uniquely identifies an Azure key object and it's version stored in Key Vault. It is of the form https://{keyvault-name}.vault.azure.net/{object-type}/{object-name}/{object-version}. It needs to be made accessible to kapitan in one of the following ways:
Note: the cryptographic algorithm used for encryption will be rsa-oaep-256. Optimal Asymmetric Encryption Padding (OAEP) is a padding scheme often used together with RSA encryption.
"},{"location":"kap_proposals/kap_10_azure_key_vault/#referencing-a-secret","title":"referencing a secret","text":"
Secrets can be referred to using ?{azkms:path/to/secret_id}, e.g.
The following variables need to be exported to the environment (depending on the authentication used) where you will run kapitan refs --reveal, in order to authenticate to your HashiCorp Vault instance:
VAULT_ADDR: URL for vault
VAULT_SKIP_VERIFY=true: if set, do not verify presented TLS certificate before communicating with Vault server. Setting this variable is not recommended except during testing
VAULT_TOKEN: token for vault or file (~/.vault-tokens)
VAULT_ROLE_ID: required by approle
VAULT_SECRET_ID: required by approle
VAULT_USERNAME: username to login to vault
VAULT_PASSWORD: password to login to vault
VAULT_CLIENT_KEY: the path to an unencrypted PEM-encoded private key matching the client certificate
VAULT_CLIENT_CERT: the path to a PEM-encoded client certificate for TLS authentication to the Vault server
VAULT_CACERT: the path to a PEM-encoded CA cert file to use to verify the Vault server TLS certificate
VAULT_CAPATH: the path to a directory of PEM-encoded CA cert files to verify the Vault server TLS certificate
VAULT_NAMESPACE: specify the Vault Namespace, if you have one
Considering any string data like any.value:whatever-you_may*like (in our case let's encrypt any.value:whatever-you_may*like with vault transit) using the key 2022-02-13-test in a transit secret engine with mount mytransit on the Vault server, to use this as a secret:
The entire string \"any.value:whatever-you_may*like\" will be encrypted by vault and looks like this in return: vault:v2:Jhn3UzthKcJ2s+sEiO60EUiDmuzqUC4mMBWp2Vjg/DGl+GDFEDIPmAQpc5BdIefkplb6yrJZq63xQ9s=. This then gets base64 encoded and stored in the secret_inside_kapitan. Now secret_inside_kapitan contains the following
Encoding tells Kapitan the type of data it was given: if it is original, then after decoding the base64 we get the original secret; if it is base64, then after decoding once we still have a base64-encoded secret and have to decode again. Parameters in the secret file are collected from the inventory of the target given on the CLI with --target my_target. If no target is provided, Kapitan will identify the variables from the environment, but providing auth is necessary as a key inside the target parameters, like the one shown:
Environment variables that can be defined in kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH & VAULT_CACERT. Extra parameters that can be defined in inventory are:
auth: the authentication method to use: token, userpass, ldap, github or approle
mount: the mount point of the key's path, e.g. if path=alpha-secret/foo/bar then mount: alpha-secret (default: secret)
crypto_key: Name of the encryption key defined in vault
always_latest: always rewrap ciphertext to the latest rotated crypto_key version. Environment variables that should NOT be defined in the inventory are VAULT_TOKEN, VAULT_USERNAME, VAULT_PASSWORD, VAULT_ROLE_ID and VAULT_SECRET_ID. This makes the secret_inside_kapitan file accessible throughout the inventory, where we can use the secret whenever necessary, like ?{vaulttransit:${target_name}/secret_inside_kapitan}
The following is an example file containing a secret and pointing to the vault: ?{vaulttransit:${target_name}/secret_inside_kapitan}
When ?{vaulttransit:${target_name}/secret_inside_kapitan} is compiled, it will look the same, with an 8-character prefix of its sha256 hash appended at the end, like:
Only users with the required tokens/permissions can reveal the secrets. Please note that roles and permissions are handled at the Vault level; we need not worry about them within Kapitan. Use the following command to reveal the secrets:
The output path is the path to save the dependency into. For example, it could be /components/external/manifest.jsonnet. Then, the user can specify the fetched file as a kapitan.compile item along with the locally-created files.
Git type may also include ref and subdir parameters as illustrated below:
If the file already exists at output_path, the fetch will be skipped. For fresh fetch of the dependencies, users may add --fetch option as follows:
kapitan compile --fetch\n
Users can also add the force_fetch: true option to the kapitan.dependencies in the inventory in order to force fetch of the dependencies of the target every time.
This will allow kapitan, during compilation, to overwrite the values in user-specified helm charts using its inventory by calling the Go & Sprig template libraries. The helm charts can be specified via local path, and users may download the helm chart via external-dependency feature (of http[s] type).
This feature basically follows the helm template command available. This will run after the fetching of the external dependencies takes place, such that users can simultaneously specify the fetch as well as the import of a helm chart dependency.
A C-binding between Helm (Go) and Kapitan (Python) will be created. Helm makes use of two template libraries, namely text/template and Sprig. The code for the helm template command will be compiled into a shared object (.so) using CGo, which exposes a C interface that Kapitan (i.e. CPython) can use. The source code for the helm template command is found here. This file will be modified to
If a yaml/json output is to be used as k8s manifest, users may specify its kind and have kapitan validate its structure during kapitan compile. The plan is to have this validation feature extendable to other outputs as well, such as terraform.
Create a portable (i.e. static) kapitan binary for users. This executable will be made available for each release on Github. The target/tested platform is Debian 9 (possibly Windows to be supported in the future).
Criteria:
speed of the resulting binary
size of the resulting binary
portability of the binary (single-file executable or has an accompanying folder)
cross-platform
actively maintained
supports Python 3.6, 3.7
Author: @yoshi-1224
"},{"location":"kap_proposals/kap_4_standalone_executable/#tools-to-be-explored","title":"Tools to be explored","text":"
(tentative first-choice) Pyinstaller
(Alternative) nuitka (also part of GSoC 2019. It might soon support single-file executable output).
Rename Secrets to Ref (or References) to improve consistency and the meaning of the backend types, by removing the ref backend and introducing new backends:
| Type | Description | Encrypted? | Compiles To |
|---|---|---|---|
| gpg | GnuPG | Yes | hashed tag |
| gkms | Google KMS | Yes | hashed tag |
| awskms | Amazon KMS | Yes | hashed tag |
| base64 | base64 | No | hashed tag |
| plain | plain text | No | plain text |
The type value will now need to be representative of the way a reference is stored via its backend.
A new plain backend type is introduced and will compile into revealed state instead of a hashed tag.
A new base64 backend type will store a base64 encoded value as the backend suggests (replacing the old badly named ref backend).
| Type | Description | Encrypted? | Compiles To |
|---|---|---|---|
| gpg | GnuPG | Yes | hashed tag |
| gkms | Google KMS | Yes | hashed tag |
| awskms | Amazon KMS | Yes | hashed tag |
| ref | base64 | No | hashed tag |
However, not all backends are encrypted - this is not consistent!
The ref type is not encrypted, as its purpose is to allow getting started with the Kapitan Secrets workflow without having to set up the encryption backend tooling (gpg, gcloud, boto, etc.).
The following variables need to be exported to the environment (depending on the authentication used) where you will run kapitan refs --reveal, in order to authenticate to your HashiCorp Vault instance:
VAULT_ADDR: URL for vault
VAULT_SKIP_VERIFY=true: if set, do not verify presented TLS certificate before communicating with Vault server. Setting this variable is not recommended except during testing
VAULT_TOKEN: token for vault or file (~/.vault-tokens)
VAULT_ROLE_ID: required by approle
VAULT_SECRET_ID: required by approle
VAULT_USERNAME: username to login to vault
VAULT_PASSWORD: password to login to vault
VAULT_CLIENT_KEY: the path to an unencrypted PEM-encoded private key matching the client certificate
VAULT_CLIENT_CERT: the path to a PEM-encoded client certificate for TLS authentication to the Vault server
VAULT_CACERT: the path to a PEM-encoded CA cert file to use to verify the Vault server TLS certificate
VAULT_CAPATH: the path to a directory of PEM-encoded CA cert files to verify the Vault server TLS certificate
VAULT_NAMESPACE: specify the Vault Namespace, if you have one
Considering a key-value pair like my_key:my_secret (in our case let's store hello:batman inside the vault) at the path secret/foo in a kv-v2 (KV version 2) secret engine on the Vault server, to use this as a secret:
Encoding tells Kapitan the type of data it was given: if it is original, then after decoding the base64 we get the original secret; if it is base64, then after decoding once we still have a base64-encoded secret and have to decode again. Parameters in the secret file are collected from the inventory of the target given on the CLI with --target dev-sea. If no target is provided, Kapitan will identify the variables from the environment, but providing auth is necessary as a key inside the target parameters, like the one shown:
Environment variables that can be defined in kapitan inventory are VAULT_ADDR, VAULT_NAMESPACE, VAULT_SKIP_VERIFY, VAULT_CLIENT_CERT, VAULT_CLIENT_KEY, VAULT_CAPATH & VAULT_CACERT. Extra parameters that can be defined in inventory are:
auth: the authentication method to use: token, userpass, ldap, github or approle
mount: the mount point of the key's path, e.g. if path=alpha-secret/foo/bar then mount: alpha-secret (default: secret)
engine: the secret engine used, either kv-v2 or kv (default: kv-v2). Environment variables that cannot be defined in the inventory are VAULT_TOKEN, VAULT_USERNAME, VAULT_PASSWORD, VAULT_ROLE_ID and VAULT_SECRET_ID. This makes the secret_inside_kapitan file accessible throughout the inventory, where we can use the secret whenever necessary, like ?{vaultkv:path/to/secret_inside_kapitan}
The following is an example file containing a secret and pointing to the vault: ?{vaultkv:path/to/secret_inside_kapitan}
Only users with the required tokens/permissions can reveal the secrets. Please note that roles and permissions are handled at the Vault level; we need not worry about them within Kapitan. Use the following command to reveal the secrets:
This feature would add the ability for Kapitan to fetch parts of the inventory from remote locations (https/git). This would allow users to combine inventories from different sources and build modular infrastructure reusable across multiple repos.
On executing the kapitan compile --fetch command, the remote inventories are fetched first, followed by the external dependencies, and finally the inventory is merged for compilation.
"},{"location":"kap_proposals/kap_7_remote_inventory/#copying-inventory-files-to-the-output-location","title":"Copying inventory files to the output location","text":"
The output path is the path to save the inventory items into. The path is relative to the inventory/ directory. For example, it could be /classes/. The contents of the fetched inventory will be recursively copied.
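A declaration for such a remote inventory might look like this (a sketch; the proposal's final schema and key names may differ):

```yaml
parameters:
  kapitan:
    inventory:
      - type: git
        source: https://github.com/kapicorp/kapitan
        subdir: examples/docker/inventory
        # relative to the inventory/ directory
        output_path: classes/
```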
The fetched inventory files will be cached in the .dependency_cache directory if --cache is set. eg. $ kapitan compile --fetch --cache
"},{"location":"kap_proposals/kap_8_google_secret_management/#using-a-secret","title":"Using a secret","text":"
In GCP, a secret contains one or more secret versions, along with its metadata. The actual contents of a secret are stored in a secret version. Each secret is identified by a name. We call that variable secret_id e.g. my_treasured_secret. The URI of the secret becomes projects/<Project_Id>/secrets/my_treasured_secret
The following command will be used to add a secret_id to kapitan.
Kapitan is packaged on PyPI and as a binary along with all its dependencies. Adding an extra key/security backend means that we need to ship another dependency with that PyPI package, making deploying changes more complicated. This project would modularize kapitan into core dependencies and extra modules.
The main module includes the essential kapitan dependencies and reclass dependencies, which will be included in the requirements.txt file.
The extra modules (PyPI extras) will be defined in the setup.py file.
The extra dependencies are secret backends (AWS KMS, Google KMS, Vault, etc.) and Helm support.
"},{"location":"kap_proposals/kap_9_bring_your_own_helm/","title":"Bring Your Own Helm Proposal","text":""},{"location":"kap_proposals/kap_9_bring_your_own_helm/#the-problem","title":"The Problem","text":"
Currently the helm binding can't be run on Mac OSX. Attempts to fix this have been made on several occasions:
https://github.com/kapicorp/kapitan/pull/414
https://github.com/kapicorp/kapitan/pull/547
https://github.com/kapicorp/kapitan/pull/568
There are some issues with the current bindings besides the lack of Mac OSX support. The golang runtime (1.14) selected will affect older helm template versions: https://github.com/helm/helm/issues/7711. Users also can't select the version of helm they'd like to use for templating.
Users supply their own helm binary. This allows them to control the version of golang runtime and version of helm they'd like to use.
In Kapitan we could rewrite the interface to use subprocess to invoke these commands. The CLI of helm 2 vs helm 3 differs slightly, but shouldn't be difficult to codify.
It would be great to get rid of cffi and golang, which would reduce the complexity and build time of the project.
Depending on how this goes, this could pave the way for a \"bring your own binary\" input type.
"},{"location":"pages/core_concepts/","title":"Kapitan Overview","text":""},{"location":"pages/core_concepts/#kapitan-at-a-glance","title":"Kapitan at a glance","text":"
Kapitan is a powerful configuration management tool designed to help engineers manage complex systems through code. It centralizes and simplifies the management of configurations with a structured approach that revolves around a few core concepts.
At the core of Kapitan lies the Inventory, a structured database of variables meticulously organized in YAML files.
This hierarchical setup serves as the single source of truth (SSOT) for your system's configurations, making it easier to manage and reference the essential components of your infrastructure.
Whether you're dealing with Kubernetes configurations, Terraform resources, or even business logic, the Inventory allows you to define and store these elements efficiently. This central repository then feeds into Kapitan's templating engines, enabling seamless reuse across various applications and services.
Kapitan takes the information stored in the Inventory and brings it to life through its templating engines upon compilation. Some of the supported input types are: Python, Jinja2, Jsonnet, Helm, and we're adding more soon.
This process transforms static data into dynamic configurations, capable of generating a wide array of outputs like Kubernetes manifests, Terraform plans, documentation, and scripts.
It's about making your configurations work for you, tailored to the specific needs of your projects.
Generators offer a straightforward entry point into using Kapitan, requiring minimal to no coding experience.
These are essentially pre-made templates that allow you to generate common configuration files, such as Kubernetes manifests, directly from your Inventory data.
Kapitan provides a wealth of resources, including the Kapitan Reference GitHub repository and various blog posts, to help users get up and running with generators.
For those looking to leverage the full power of Kapitan, Kadet introduces a method to define and reuse complex configurations through Python.
This internal library facilitates the creation of JSON and YAML manifests programmatically, offering a higher degree of customization and reuse. Kadet empowers users to craft intricate configurations with the simplicity and flexibility of Python.
Kapitan References provide a secure way to store passwords, settings, and other essential data within your project. Think of them as special code placeholders.
Flexibility: Update a password once, and Kapitan updates it everywhere automatically.
Organization: References tidy up your project, especially when you're juggling multiple settings or environments (dev, staging, production).
Security: Protect sensitive information like passwords with encryption.
Tip
Use Tesoro, our Kubernetes Admission Controller, to complete your integration with Kubernetes for secure secret decryption on-the-fly.
Say we want to fetch the source code from our kapitan repository, specifically, kapicorp/kapitan/kapitan/version.py. Let's create a very simple target file inventory/targets/kapitan-example.yml.
parameters:\n  kapitan:\n    vars:\n      target: kapitan-example\n    dependencies:\n      - type: git\n        output_path: source/kapitan\n        source: git@github.com:kapicorp/kapitan.git\n        subdir: kapitan\n        ref: master\n        submodules: true\n    compile:\n      - input_paths:\n          - source/kapitan/version.py\n        input_type: jinja2  # just to copy the file over to target\n        output_path: .\n
Say we want to download kapitan README.md file. Since it's on Github, we can access it as https://raw.githubusercontent.com/kapicorp/kapitan/master/README.md. Using the following inventory, we can copy this to our target folder:
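A minimal inventory for this could look like the following (a sketch mirroring the git example above; the output path is illustrative):

```yaml
parameters:
  kapitan:
    vars:
      target: kapitan-example
    dependencies:
      - type: https
        source: https://raw.githubusercontent.com/kapicorp/kapitan/master/README.md
        output_path: README.md
    compile:
      - input_paths:
          - README.md
        input_type: jinja2  # just to copy the file over to target
        output_path: .
```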
Fetches helm charts and any specific subcharts in the requirements.yaml file.
helm_path can be used to specify the helm binary name or path. It defaults to the value of the KAPITAN_HELM_PATH environment variable, or simply to helm if neither is set. Specify it only if you don't want the default behaviour.
source can be either the URL to a chart repository, or the URL to a chart on an OCI registry (supported since Helm 3.8.0).
If we want to download the prometheus helm chart we simply add the dependency to the monitoring target. We want a specific version 11.3.0 so we put that in.
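The dependency declaration might look like this (a sketch; the chart repository URL and output path are assumptions, so check where the chart you want is actually hosted):

```yaml
parameters:
  kapitan:
    dependencies:
      - type: helm
        output_path: charts/prometheus
        source: https://prometheus-community.github.io/helm-charts
        chart_name: prometheus
        version: 11.3.0
```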
Kapitan is capable of recursively fetching inventory items stored in remote locations and copying them to the specified output path. This feature can be used by specifying those inventory items in classes or targets under parameters.kapitan.inventory. Supported types are:
git type
http type
Class items can be specified before they are locally available as long as they are fetched in the same run. Example of this is given below.
Git types can fetch external inventories available via HTTP/HTTPS or SSH URLs. This is useful for fetching repositories or their sub-directories, as well as accessing them in specific commits and branches (refs).
Note: git types require git binary on your system.
Let's say we want to fetch a class from our kapitan repository, specifically kapicorp/kapitan/tree/master/examples/docker/inventory/classes/dockerfiles.yml.
Let's create a simple target file docker.yml
Note
external dependencies are used to fetch dependency items in this example.
[WARNING] Reclass class not found: 'dockerfiles'. Skipped!\n[WARNING] Reclass class not found: 'dockerfiles'. Skipped!\nInventory https://github.com/kapicorp/kapitan: fetching now\nInventory https://github.com/kapicorp/kapitan: successfully fetched\nInventory https://github.com/kapicorp/kapitan: saved to inventory/classes\nDependency https://github.com/kapicorp/kapitan: saved to components\nDependency https://github.com/kapicorp/kapitan: saved to templates\nCompiled docker (0.11s)\n
"},{"location":"pages/blog/","title":"Blog","text":""},{"location":"pages/blog/04/12/2022/kapitan-logo-5-years-of-kapitan/","title":"5 Years of Kapitan","text":"
Last October we quietly celebrated 5 years of Kapitan.
In 5 years, we've been able to witness the steady and relentless growth of Kapitan, which has however never caught the full attention of the majority of the community.
The main issue has always been an embarrassing lack of documentation, and we've worked hard to improve on that, with more updates due soon.
Let this first blog post from a revamped website be a promise to our community of a better effort in explaining what sets Kapitan apart, and makes it the only tool of its kind.
And let's start with a simple question: Why do you even need Kapitan?
Credits
In reality Kapitan's heartbeat started about 9 months earlier at DeepMind Health, created by [**Ricardo Amaro**](https://github.com/ramaro) with the help of some of my amazing team: in no particular order [Adrian Chifor](https://github.com/adrianchifor), [Paul S](https://github.com/uberspot) and [Luis Buriola](https://github.com/gburiola). It was then kindly released to the community by Google/DeepMind and has since been improved thanks to more than [50 contributors](https://github.com/kapicorp/kapitan/graphs/contributors).\n
"},{"location":"pages/blog/04/12/2022/kapitan-logo-5-years-of-kapitan/#why-do-i-need-kapitan","title":"Why do I need Kapitan?","text":"
Kapitan is a hard sell, but a rewarding one. For these main reasons:
Kapitan solves problems that some don't know/think they have.
Some people by now have probably accepted the Status Quo and think that some suffering is part of their job descriptions.
Objectively, Kapitan requires an investment of effort to learn how to use a new tool, and this adds friction.
All I can say is that it is very rewarding once you get to use it, so stick with me while I try to explain the problems that Kapitan solves.
It would be reductive to list the problems that Kapitan solves, because sometimes we ourselves are stunned by what Kapitan is being used for, so I will start with some common relatable ones, and perhaps that will give you the right framing to understand how to use it with your setup.
In its most basic explanation, Kapitan solves the problem of avoiding duplication of configuration data: by consolidating it in one place (the Inventory), and making it accessible by all the tools and languages it integrates with (see Input Types).
This configuration data is then used by Kapitan (templates) to configure and operate a number of completely distinct and unaware tools which would normally not be able to share their configurations.
Let's consider the case where you want to define a new bucket, with a given bucket_name. Without Kapitan you would probably need to:
- Write a PR on your **Terraform** repository to create the new bucket.
  - Which name should I use? Make sure to write it down! `CTRL-C`
- Write a PR for your `values.yaml` file to configure your **Helm** chart: `CTRL-V`
- Write somewhere some documentation to note down the bucket name and why it exists. Another `CTRL-V`
- Another PR to change some **kustomize** configuration for another service to tell it to use the new bucket. `CTRL-V`
- Days later, it's time to upload something to that bucket: `gsutil cp my_file wait_what_was_the_bucket_name_again`.. Better check the documentation: `CTRL-C` + `CTRL-V`
When using Kapitan, your changes are likely to be contained within one PR, from which you can have a full view of everything that is happening. What happens is explained in this flow
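For instance, the bucket name might be declared once in the Inventory and interpolated everywhere else. A minimal sketch (all names here are illustrative, not from the original example):

```yaml
parameters:
  my_bucket:
    name: my-project-assets        # the single source of truth
  terraform:
    buckets:
      - ${my_bucket:name}          # Terraform creates it
  helm_values:
    bucketName: ${my_bucket:name}  # the Helm chart consumes it
```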
```mermaid
%%{ init: { securityLevel: 'loose'} }%%
graph LR
    classDef pink fill:#f9f,stroke:#333,stroke-width:4px,color:#000,font-weight: bold;
    classDef blue fill:#00FFFF,stroke:#333,stroke-width:4px,color:#000,font-weight: bold;
    classDef bold color:#000,font-weight: bold;

    DATA --> KAPITAN
    BUCKET --> DATA
    KAPITAN --> KUBERNETES
    KAPITAN --> TERRAFORM
    KAPITAN --> DOCUMENTATION
    KAPITAN --> SCRIPT
    KAPITAN --> HELM
    KUBERNETES --> BUCKET_K8S
    TERRAFORM --> BUCKET_TF
    DOCUMENTATION --> BUCKET_DOC
    SCRIPT --> BUCKET_SCRIPT
    HELM --> BUCKET_HELM

    DATA[("All your data")]
    BUCKET("bucket_name")
    KAPITAN(("<img src='/images/kapitan_logo.png'; width='150'/>")):::blue

    subgraph " "
        KUBERNETES(["Kubernetes"]):::pink
        BUCKET_K8S(".. a ConfigMap uses bucket_name"):::bold
    end
    subgraph " "
        TERRAFORM(["Terraform"]):::pink
        BUCKET_TF("..creates the bucket bucket_name"):::bold
    end
    subgraph " "
        DOCUMENTATION(["Documentation"]):::pink
        BUCKET_DOC("..references a link to bucket_name"):::bold
    end
    subgraph " "
        SCRIPT(["Canned Script"]):::pink
        BUCKET_SCRIPT("..knows how to upload files to bucket_name"):::bold
    end
    subgraph " "
        HELM(["Helm"]):::pink
        BUCKET_HELM("..configures a chart to use the bucket_name"):::bold
    end
```
Thanks to its flexibility, you can use Kapitan to generate all sorts of configurations: Kubernetes and Terraform resources, ArgoCD pipelines, Docker Compose files, random configs, scripts, documentation and anything else you find relevant. The trick is obviously in how to drive these changes, but it is not as complicated as it sounds. We'll get there soon enough!
Let's now look at another example of practices so established in the way we do things that they become almost impossible to see. To highlight the potential issues with this way of doing things, let's ask some questions about your current setup. We'll pick on Kubernetes this time.
I'll start with Kubernetes, such a popular and brilliant solution to problems most people should not be concerned with (jokes apart, I adore Kubernetes). To most, Kubernetes is the type of solution that quickly turns into a problem in its own right.
So.. how do you deploy to Kubernetes right now?
Helm comes to mind first, right?
Kapitan + Helm: BFF
In spite of Kapitan being initially considered (even by ourselves) as an alternative to Helm, we've actually enjoyed the benefits of integrating with this amazing tool and the ecosystem it gives us access to. So yes, good news: you can use Helm right from within Kapitan!
Well, let's put that to the test. How do you manage your Helm charts? I'll attempt to break these questions down into categories.
What about the official ones that you didn't create yourself?
How many values.yaml files do you have?
How much consistency is there between them? any snowflakes?
If you change something, like with the bucket_name example above:
how many places do you need to go and update?
And how many times do you get it wrong?
Don't you feel all your charts look the same?
Yet how many times do you need to deviate from the one you thought captured everything?
What if you need to make a change to all your charts at once: how do you deal with it?
What about configuration files, how do you deal with templating those?
How do you deal with "official" charts? Do they always cover what you want to do?
How do you deal with modifications that you need to apply to your own version of an official chart?
What if you need to make a change that affects ALL your charts?
Or if the change is for all the charts for a set of microservices?
How many times do you find yourself setting parameters on the command line of Helm and other tools?
How many times did you connect to the wrong Kubernetes context?
How many of your colleagues have the same clean context setup as you have?
How many things are there that you wish you were tracking?
How do I connect to the production database? Which user is it again?
How easy is it for you to create a new environment from scratch?
Are you sure?
When was the last time you tried?
How easy is it to keep your configuration up to date?
Does your documentation need to be "understood", or can it simply be executed?
How many conditionals like this do you have in your documentation?
NOTE: Cluster X in project Y has an older version of Q and requires you to do Z instead N because of A, B and C!
Would you be able to follow those instructions at 3am on a Sunday morning?
How do you handle secrets in your repository?
Do you know how to create your secrets from scratch?
Do you remember that token you created 4 months ago? How did you do that?
How long would it take you?
Is the process of creating them "secure"?
Or does it leave you with random certificates and tokens unencrypted in your "Downloads" folder?
The above concerns: do they also apply to other things you manage?
Terraform?
Pipelines?
Random other systems you interact with?
I'll stop here because I do not want to lose you, and neither do I want to discourage you.
But if you look around, it's true: you do have a very complicated setup. And Kapitan can help you streamline it. In fact, Kapitan can leave you with a consistent and uniform way to manage all these concerns at once.
My job here is done: you have awakened, and you won't look at your setup the same way again. Stay tuned to learn how Kapitan can change the way you do things.
The Kapicorp team is happy to announce a new release of Kapitan.
This release is yet another great bundle of features and improvements over the past year, the majority of which have been contributions from our community!
Head over to our release page on GitHub for a full list of features and contributors.
If you missed it, have a look at our latest blog post: 5 Years of Kapitan.
Please help us by visiting our Sponsor Kapitan page.
The Kapicorp team is happy to announce a new release of Kapitan.
This release contains loads of improvements for the past 6 months, the majority of which have been contributions from our community!
Head over to our release page on GitHub for a full list of features and contributors.
Please help us by visiting our Sponsor Kapitan page.
"},{"location":"pages/blog/27/08/2023/kapitan-logo-deploying-keda-with-kapitan/","title":"Deploying Keda with Kapitan","text":"
We have worked hard to bring you a brand new way of experiencing Kapitan, through something that we call generators.
Although we introduced the concept back in 2020 with our blog post Keep your ship together with Kapitan, the sheer amount of new capabilities (and, frankly, the embarrassing lack of documentation and examples) forces me to show you the new capabilities using a practical example: deploying Keda.
"},{"location":"pages/blog/27/08/2023/kapitan-logo-deploying-keda-with-kapitan/#objective-of-this-tutorial","title":"Objective of this tutorial","text":"
We are going to deploy Keda using the helm chart approach. While Kapitan supports a native way to deploy helm charts using the helm input type, we are going instead to use a generator based approach using the \"charts\" generator.
This tutorial will show you how to configure kapitan to:
download a helm chart
compile a helm chart
modify a helm chart using mutations
The content of this tutorial is already available on the kapitan-reference repository.
```yaml
## inventory/classes/components/keda.yml
parameters:
  keda:
    params:
      # Variables to reference from other places
      application_version: 2.11.2
      service_account_name: keda-operator
      chart_name: keda
      chart_version: 2.11.2
      chart_dir: system/sources/charts/${keda:params:chart_name}/${keda:params:chart_name}/${keda:params:chart_version}/${keda:params:application_version}
      namespace: keda
      helm_values: {}
...
```
Override Helm Values
As an example, we could pass Helm an override to the default values to make the operator deploy 2 replicas:

```yaml
helm_values:
  operator:
    replicaCount: 2
```
"},{"location":"pages/blog/27/08/2023/kapitan-logo-deploying-keda-with-kapitan/#download-the-chart","title":"Download the chart","text":"
Kapitan supports downloading dependencies, including helm charts.
When Kapitan is run with the --fetch flag, it will download the dependency if it is not already present. Use --force-fetch if you want to download it every time. Learn more about External dependencies.
```yaml
## inventory/classes/components/keda.yml
...
  kapitan:
    dependencies:
      # Tells kapitan to download the helm chart into the chart_dir directory
      - type: helm
        output_path: ${keda:params:chart_dir}
        source: https://kedacore.github.io/charts
        version: ${keda:params:chart_version}
        chart_name: ${keda:params:chart_name}
...
```
Parameter interpolation
Notice how we are using parameter interpolation from the previously defined keda.params section. This will make it easier in the future to override some aspects of the configuration on a per-target basis.
"},{"location":"pages/blog/27/08/2023/kapitan-logo-deploying-keda-with-kapitan/#generate-the-chart","title":"Generate the chart","text":"
```yaml
## inventory/classes/components/keda.yml
...
  charts:
    # Configures a helm generator to compile files for the given chart
    keda:
      chart_dir: ${keda:params:chart_dir}
      helm_params:
        namespace: ${keda:params:namespace}
        name: ${keda:params:chart_name}
      helm_values: ${keda:params:helm_values}
```
Now when we run kapitan compile we will see the chart being downloaded and the manifests being produced.

```shell
./kapitan compile -t keda --fetch
Dependency keda: saved to system/sources/charts/keda/keda/2.11.2/2.11.2
Rendered inventory (1.87s)
Compiled keda (2.09s)
```
kapitan compile breakdown
--fetch tells kapitan to fetch the chart if it is not found locally
-t keda tells kapitan to compile only the previously defined keda.yml target
```shell
ls -l compiled/keda/manifests/
total 660
-rw-r--r-- 1 ademaria root 659081 Aug 29 10:25 keda-bundle.yml
-rw-r--r-- 1 ademaria root     79 Aug 29 10:25 keda-namespace.yml
-rw-r--r-- 1 ademaria root   7092 Aug 29 10:25 keda-rbac.yml
-rw-r--r-- 1 ademaria root   1783 Aug 29 10:25 keda-service.yml
```
Now let's do a couple of things that would not be easy to do with helm natively.
You may already have noticed that the content of the chart is split into multiple files: this is because the Generator is configured to separate different resource types into different files for convenience and consistency. The mechanism behind this is a "mutation" of type "bundle", which tells Kapitan which file to save a resource into.
Here are some example "mutations" which separate different kinds into different files.
Currently, most of the Keda-related resources are bundled into the -bundle.yml file. Instead, we want to separate them into their own files.
Let's add this configuration:
```yaml
  charts:
    # Configures a helm generator to compile files for the given chart
    keda:
      chart_dir: ${keda:params:chart_dir}
      ...
      mutations:
        bundle:
          - conditions:
              # CRDs need to be setup separately
              kind: [CustomResourceDefinition]
            filename: '{content.component_name}-crds'
```
Upon compile, you can now see that the CRDs are being moved to a different file:

```shell
ls -l compiled/keda/manifests/
total 664
-rw-r--r-- 1 ademaria root  11405 Aug 29 10:56 keda-bundle.yml
-rw-r--r-- 1 ademaria root 647672 Aug 29 10:56 keda-crds.yml
-rw-r--r-- 1 ademaria root     79 Aug 29 10:56 keda-namespace.yml
-rw-r--r-- 1 ademaria root   7092 Aug 29 10:56 keda-rbac.yml
-rw-r--r-- 1 ademaria root   1783 Aug 29 10:56 keda-service.yml
```
In this tutorial we have explored some of Kapitan's capabilities to manage and modify Helm charts. The next tutorial will show how to make use of Keda and deploy a generator for Keda resources.
"},{"location":"pages/commands/kapitan_compile/#fetch-on-compile","title":"Fetch on compile","text":"
Use the --fetch flag to fetch Remote Inventories and the External Dependencies.
```shell
kapitan compile --fetch
```
This will download the dependencies according to their configurations. By default, Kapitan does not overwrite an existing item with the same name as that of the fetched inventory items.
Use the --force-fetch flag to force fetch (update cache with freshly fetched items) and overwrite inventory items of the same name in the output_path.
```shell
kapitan compile --force-fetch
```
Use the --cache flag to cache the fetched items in the .dependency_cache directory in the root project directory.
The --embed-refs flag tells Kapitan to embed these references on compile, alongside the generated output. By doing so, the compiled output is self-contained and can be revealed by Tesoro or other tools.

```shell
kapitan compile --embed-refs
```
See how the compiled output for this specific target changes to embed the actual encrypted content, marked by ?{gpg: :embedded} to indicate it is a gpg reference, rather than just holding a reference to it (like ?{gpg:targets/minikube-mysql/mysql/password:ec3d54de}).
Kapitan allows you to conveniently override defaults by specifying a local .kapitan file in the root of your repository (relative to the kapitan configuration):
This comes in handy to make sure Kapitan runs consistently for your specific setup.
Info
Any Kapitan command can be overridden in the .kapitan dotfile, but here are some of the most common examples.
To enforce the Kapitan version used for compilation (for consistency and safety), you can add version to .kapitan:
```yaml
version: 0.30.0

...
```
This constraint can be relaxed to also allow minor versions to be accepted:

```yaml
version: 0.30 # Allows any 0.30.x release to run

...
```
"},{"location":"pages/commands/kapitan_dotfile/#command-line-flags","title":"Command line flags","text":"
You can also permanently define all command line flags in the .kapitan config file. For example:
```yaml
...

compile:
  indent: 4
  parallelism: 8
```
would be equivalent to running:
```shell
kapitan compile --indent 4 --parallelism 8
```
For flags which are shared by multiple commands, you can either selectively define them for single commands in a section with the same name as the command, or you can set any flags in the global section, in which case they're applied for all commands. If you set a flag in both the global section and a command's section, the value from the command's section takes precedence over the value from the global section.
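As a sketch of this precedence rule (the paths here are illustrative assumptions):

```yaml
# .kapitan
global:
  inventory-path: ./inventory        # used by all commands...
compile:
  inventory-path: ./other_inventory  # ...but `kapitan compile` uses this instead
```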
As an example, you can configure the inventory-path in the global section of the Kapitan dotfile to make sure it's persisted across all Kapitan runs.
```yaml
...

global:
  inventory-path: ./some_path
```
which would be equivalent to running any command with --inventory-path=./some_path.
Another flag that you may want to set in the global section is inventory-backend to select a non-default inventory backend implementation.
```yaml
global:
  inventory-backend: reclass
```
which would be equivalent to always running Kapitan with --inventory-backend=reclass.
Please note that the inventory-backend flag currently can't be set through the command-specific sections of the Kapitan config file.
```
Running yamllint on all inventory files...

.yamllint not found. Using default values
File ./inventory/classes/components/echo-server.yml has the following issues:
    95:29: forbidden implicit octal value "0550" (octal-values)
File ./inventory/classes/terraform/gcp/services.yml has the following issues:
    15:11: duplication of key "enable_compute_service" in mapping (key-duplicates)

Total yamllint issues found: 2

Checking for orphan classes in inventory...

No usage found for the following 6 classes:
{'components.argoproj.cd.argocd-server-oidc',
'components.helm.cert-manager-helm',
'components.rabbitmq-operator.rabbitmq-configuration',
'components.rabbitmq-operator.rabbitmq-operator',
'features.gkms-demo',
'projects.localhost.kubernetes.katacoda'}
```
Validates the schema of compiled output. Validate options are specified in the inventory under parameters.kapitan.validate. Supported types are:
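The list of supported types has been elided here, but as an illustration, a kubernetes validate entry might be sketched like this (the kind, version and paths are assumptions):

```yaml
parameters:
  kapitan:
    validate:
      - type: kubernetes
        kind: service
        version: 1.14.0
        output_paths:
          - manifests/mysql_service.yml
```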
"},{"location":"pages/commands/kapitan_validate/#usage","title":"Usage","text":"standalonemanual with kapitan compileautomatic with .kapitan dotfile
```shell
kapitan validate
```
click to expand output
```
created schema-cache-path at ./schemas
Validation: manifest validation successful for ./compiled/minikube-mysql/manifests/mysql_secret.yml
Validation: manifest validation successful for ./compiled/minikube-mysql/manifests/mysql_service_jsonnet.yml
Validation: manifest validation successful for ./compiled/minikube-mysql/manifests/mysql_service_simple.yml
```
Kubernetes has different resource kinds, for instance:
service
deployment
statefulset
Kapitan has built-in support for validation of Kubernetes kinds, and automatically integrates with https://kubernetesjsonschema.dev. See github.com/instrumenta/kubernetes-json-schema for more information.
Info
Kapitan will automatically download the schemas for Kubernetes Manifests directly from https://kubernetesjsonschema.dev
By default, the schemas are cached into ./schemas/, which can be modified with the --schemas-path option.
Override schemas-path permanently
Remember to use the .kapitan dotfile configuration to permanently override the schemas-path location.
```shell
$ cat .kapitan
# other options abbreviated for clarity
validate:
  schemas-path: custom/schemas/cache/path
```
Many of our features come from contributions from external collaborators. Please help us improve Kapitan by extending it with your ideas, or help us squash bugs you discover.
It's simple, just send us a PR with your improvements!
We would like to ask you to fork the Kapitan project and create a Pull Request targeting the master branch. All submissions, including submissions by project members, require review.

```shell
poetry install --all-extras --with dev --with test --with docs
```
Poetry creates a virtual environment with the required dependencies installed.
Run Kapitan with your own compiled code:

```shell
poetry run kapitan <your command>
```
Because we are using a pinned version of reclass which is added as a submodule into Kapitan's repository, you need to pull it separately by executing the command below:
Run make test to run all tests. If you modify anything in the examples/ folder make sure you replicate the compiled result of that in tests/test_kubernetes_compiled. If you add new features, run make test_coverage && make test_formatting to make sure the test coverage remains at current or better levels and that code formatting is applied.
If you would like to evaluate your changes by running your version of Kapitan, you can do that by running bin/kapitan from this repository or even setting an alias to it.
To make sure you adhere to the Style Guide for Python (PEP8) Python Black is used to apply the formatting so make sure you have it installed with pip3 install black.
","tags":["community"]},{"location":"pages/contribute/code/#apply-via-git-hook","title":"Apply via Git hook","text":"
Run pip3 install pre-commit to install the pre-commit framework.
In the Kapitan root directory, run pre-commit install
Create a branch named release-v<NUMBER>. Use v0.*.*-rc.* if you want pre-release versions to be uploaded.
Update CHANGELOG.md with the release changes.
Once reviewed and merged, GitHub Actions will auto-release.
The merge has to happen with a merge commit not with squash/rebase so that the commit message still mentions kapicorp/release-v* inside.
","tags":["community"]},{"location":"pages/contribute/code/#packaging-extra-resources-in-python-package","title":"Packaging extra resources in python package","text":"
To package any extra resources/files in the pip package, make sure you modify MANIFEST.in.
","tags":["community"]},{"location":"pages/contribute/code/#leave-a-comment","title":"Leave a comment","text":"","tags":["community"]},{"location":"pages/contribute/documentation/","title":"Documentation","text":"
Poor documentation often prevents new users from adopting Kapitan. Help us improve by contributing fixes and keeping it up-to-date.
Find something odd? Let us know or change it yourself: you can edit pages of this website on GitHub by clicking the pencil icon at the top right of this page!
We use mkdocs to generate our gh-pages from .md files under the docs/ folder.
Updating our gh-pages is therefore a two-step process.
","tags":["community"]},{"location":"pages/contribute/documentation/#update-the-markdown","title":"Update the markdown","text":"
Submit a PR for our master branch that updates the .md file(s). Test how the changes will look when deployed to gh-pages by serving them on localhost:
Edit the strict property in mkdocs.yml and set it to false.
make local_serve_documentation
Now the documentation site should be available at localhost:8000.
","tags":["community"]},{"location":"pages/contribute/documentation/#submit-a-pr","title":"Submit a PR","text":"
Once the above PR has been merged, our CI will deploy your docs automatically.
This input type simply copies the input templates to the output directory without any rendering/processing. For the copy input type, input_paths can be either a file or a directory: in the case of a directory, all files in the directory will be copied to output_path.
Supported output types: N/A (no need to specify output_type)
Example
```yaml
kapitan:
  compile:
    - input_type: copy
      ignore_missing: true # Do not error if path is missing. Defaults to False
      input_paths:
        - resources/state/${target_name}/.terraform.lock.hcl
      output_path: terraform/
```
This input type executes an external script or binary. This can be used to manipulate already compiled files or execute binaries outside of kapitan that generate or manipulate files.
For example, ytt is a useful yaml templating tool. It is not built into the kapitan binary, however, with the external input type, we could specify the ytt binary to be executed with specific arguments and environment variables.
In this example, we're removing a label from k8s manifests in a directory ingresses and placing them into the compiled target directory.
Supported output types: N/A (no need to specify output_type)
Additionally, the input type supports field env_vars, which can be used to set environment variables for the external command. By default, the external command doesn't inherit any environment variables from Kapitan's environment. However, if environment variables $PATH or $HOME aren't set in env_vars, they will be propagated from Kapitan's environment to the external command's environment.
Finally, Kapitan will substitute ${compiled_target_dir} in both the command's arguments and the environment variables. This variable needs to be escaped in the configuration to ensure that reclass won't interpret it as a reclass reference.
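Putting these pieces together, a compile entry for the external input type might be sketched as follows (the script path, arguments and variable name are assumptions for illustration):

```yaml
kapitan:
  compile:
    - input_type: external
      input_paths:
        - scripts/patch_labels.sh   # hypothetical external script
      output_path: .
      args:
        - \${compiled_target_dir}   # escaped so reclass does not interpret it
      env_vars:
        CUSTOM_VAR: some_value      # $PATH and $HOME are propagated if not set here
```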
"},{"location":"pages/input_types/helm/","title":"Input Type | Helm","text":"
This is a Python binding to helm template command for users with helm charts. This does not require the helm executable, and the templates are rendered without the Tiller server.
Unlike other input types, Helm input types support the following additional parameters under kapitan.compile:
helm_values is an object containing values specified that will override the default values in the input chart. This has exactly the same effect as specifying --values custom_values.yml for helm template command where custom_values.yml structure mirrors that of helm_values.
helm_values_files is an array containing the paths to Helm values files used as input for the chart. This has exactly the same effect as specifying --file my_custom_values.yml for the helm template command, where my_custom_values.yml is a Helm values file. If the same keys exist in helm_values and in multiple specified helm_values_files, the last indexed file in helm_values_files takes precedence, followed by the preceding helm_values_files, and finally the helm_values defined in the compile block. There is an example in the tests: the monitoring-dev (kapitan/tests/test_resources/inventory/targets/monitoring-dev.yml) and monitoring-prd (kapitan/tests/test_resources/inventory/targets/monitoring-prd.yml) targets both use the monitoring (tests/test_resources/inventory/classes/component/monitoring.yml) component. This component has Helm chart input and takes a common.yml helm_values file which is "shared" by any target that uses the component; it also takes a dynamically defined file based on a kapitan variable defined in the target.
helm_path can be used to provide the Helm binary name or path. helm_path defaults to the value of the KAPITAN_HELM_PATH env var if it is set, else it defaults to helm.
helm_params correspond to the flags for helm template. Most flags that Helm supports can be used here by replacing '-' with '_' in the flag name.
Flags without argument must have a boolean value, all other flags require a string value.
Special flags:
name: equivalent of helm template [NAME] parameter. Ignored if name_template is also specified. If neither name_template nor name are specified, the --generate-name flag is used to generate a name.
output_file: name of the single file used to output all the generated resources. This is equivalent to calling helm template without specifying an output directory. If not specified, each resource is generated into a distinct file.
include_crds and skip_tests: These flags are enabled by default and should be set to false to be removed.
debug: prints the helm debug output in kapitan debug log.
namespace: note that due to the restriction on helm template command, specifying the namespace does not automatically add metadata.namespace property to the resources. Therefore, users are encouraged to explicitly specify it in all resources:
```yaml
metadata:
  namespace: {{ .Release.Namespace }} # or any other custom values
```
Let's use nginx-ingress helm chart as the input. Using kapitan dependency manager, this chart can be fetched via a URL as listed in https://helm.nginx.com/stable/index.yaml.
On a side note, https://helm.nginx.com/stable/ is the chart repository URL which you would pass to helm repo add, and this repository should contain an index.yaml that lists all the available charts and their URLs. By locating this index.yaml file, you can find all the charts available in the repository.
We can use version 0.3.3 found at https://helm.nginx.com/stable/nginx-ingress-0.3.3.tgz. We can create a simple target file as inventory/targets/nginx-from-chart.yml whose content is as follows:
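The target file itself has been elided here, but based on the surrounding description it would be shaped roughly like this (a sketch; the dependency style and override values are assumptions):

```yaml
# inventory/targets/nginx-from-chart.yml (a sketch; exact values are assumptions)
classes:
  - common

parameters:
  kapitan:
    dependencies:
      # fetch and unpack the chart before compile
      - type: https
        source: https://helm.nginx.com/stable/nginx-ingress-0.3.3.tgz
        unpack: True
        output_path: components/charts
    compile:
      - input_type: helm
        input_paths:
          - components/charts/nginx-ingress
        output_path: .
        helm_values:
          controller:
            name: my-controller   # overrides a chart default
        helm_params:
          name: my-nginx
          namespace: nginx
```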
The chart is fetched before compile, which creates components/charts/nginx-ingress folder that is used as the input_paths for the helm input type. To confirm if the helm_values actually has overridden the default values, we can try:
| Step      | Flag         | Description                                                           | Configuration                     |
|-----------|--------------|-----------------------------------------------------------------------|-----------------------------------|
| Inventory |              | Kapitan uses reclass to render a final version of the inventory.      |                                   |
| Fetch     | `--fetch`    | Kapitan fetches external dependencies.                                | `parameters.kapitan.dependencies` |
| Compile   |              | Kapitan compiles the input types for each target.                     | `parameters.kapitan.compile`      |
| Reveal    | `--reveal`   | Kapitan reveals the secrets directly in the compiled output.          | `parameters.kapitan.secrets`      |
| Copy      |              | Kapitan moves the output files from the tmp directory to `/compiled`. |                                   |
| Validate  | `--validate` | Kapitan validates the schema of compiled output.                      | `parameters.kapitan.validate`     |
| Finish    |              | Kapitan has completed all tasks.                                      |                                   |

## Supported input types
Input types can be specified in the inventory under kapitan.compile in the following format:
We define a list with all the templates we want to compile with this input type
The input type will then render the files at the root of the target compiled folder, e.g. compiled/${target_name}
We pass the list as input_paths
Notice how we make use of variable interpolation to use the convenience of a list to add all the files we want to compile. You can now simply add to that list from any other place in the inventory that calls that class.
input_paths can either be a file, or a directory: in case of a directory, all the templates in the directory will be rendered.
input_params (optional) can be used to pass extra parameters, helpful when needing to use a similar template for multiple components in the same target.
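The steps above can be sketched as a class along these lines (the docs list name is an assumption):

```yaml
parameters:
  docs:
    # any class can append templates to this list
    - templates/docs/README.md
  kapitan:
    compile:
      - input_type: jinja2
        input_paths: ${docs}
        output_path: .
```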
We usually store documentation templates under the templates/docs directory.
examples/kubernetes/docs/nginx/README.md
```jinja
{% set i = inventory.parameters %}

# Welcome to the README!

Target *{{ i.target_name }}* is running:

* {{ i.nginx.replicas }} replicas of *nginx* running nginx image {{ i.nginx.image }}
* on cluster {{ i.cluster.name }}
```
Compiled result
```markdown
# Welcome to the README!

Target *minikube-nginx-jsonnet* is running:

* 1 replicas of *nginx* running nginx image nginx:1:15.8
* on cluster minikube
```
When we use Jinja to render scripts, we tend to call them \"canned scripts\" to indicate that these scripts have everything needed to run without extra parameters.
We usually store script templates under the templates/scripts directory.
examples/kubernetes/components/nginx-deploy.sh
```bash
#!/bin/bash -e
DIR=$(dirname ${BASH_SOURCE[0]})
{% set i = inventory.parameters %} #(1)!

KUBECTL="kubectl -n {{i.namespace}}" #(2)!

# Create namespace before anything else
${KUBECTL} apply -f ${DIR}/pre-deploy/namespace.yml

for SECTION in manifests
do
    echo "## run kubectl apply for ${SECTION}"
    ${KUBECTL} apply -f ${DIR}/${SECTION}/ | column -t
done
```
We import the inventory as a Jinja variable
We use it to set the namespace explicitly
Compiled result
```bash
#!/bin/bash -e
DIR=$(dirname ${BASH_SOURCE[0]})
 #(1)!

KUBECTL="kubectl -n minikube-nginx-jsonnet" #(2)!

# Create namespace before anything else
${KUBECTL} apply -f ${DIR}/pre-deploy/namespace.yml

for SECTION in manifests
do
    echo "## run kubectl apply for ${SECTION}"
    ${KUBECTL} apply -f ${DIR}/${SECTION}/ | column -t
done
```
The script is now a "canned script" and ready to be used for this specific target.
You can see that the namespace has been replaced with the target's one.
## Accessing the inventory
Templates will be provided at runtime with 3 variables:
- inventory: To access the inventory for that specific target.
- inventory_global: To access the inventory of all targets.
- input_params: To access the optional dictionary provided to the input type.
Use of inventory_global
inventory_global can be used to generate a "global" README.md that contains a link to all generated targets.
```jinja
| *Target* |
|------------------------------------------------------------------------|
{% for target in inventory_global | sort() %}
{% set p = inventory_global[target].parameters %}
|[{{target}}](../{{target}}/docs/README.md) |
{% endfor %}
```
`{{ "hello world" | regex_replace(pattern="world", replacement="kapitan") }}`

escape all regular expression special characters in a string

`{{ "+s[a-z].*" | regex_escape }}`

perform re.search and return the list of matches or a backref

`{{ "hello world" | regex_search("world.*") }}`

perform re.findall and return the list of matches as an array

`{{ "hello world" | regex_findall("world.*") }}`

return the list of regular files matched by a glob

`{{ "./path/file*" | fileglob }}`

return the bool for a value

`{{ "yes" | bool }}`

value ? true_val : false_val

`{{ condition | ternary("yes", "no") }}`

randomly shuffle elements of a list

`{{ [1, 2, 3, 4, 5] | shuffle }}`

reveal a ref/secret tag only if the compile --reveal flag is set

`{{ "?{base64:my_ref}" | reveal_maybe }}`
Tip
You can also provide the path to your custom filter modules on the CLI. By default, you can put your filters in lib/jinja2_filters.py and they will automatically get loaded.
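As a sketch of what such a module might contain: to our understanding, Kapitan registers the functions defined in the filter module as Jinja2 filters under their function names. The `shout` filter below is a hypothetical example, not a built-in:

```python
# lib/jinja2_filters.py -- hypothetical custom filter module.
# Plain module-level functions like this one become usable as
# Jinja2 filters under their function name (an assumption; check
# your Kapitan version's documentation).

def shout(value):
    """Uppercase the input and append an exclamation mark."""
    return str(value).upper() + "!"
```

A template could then use it as `{{ "hello" | shout }}`.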
# Input Type | Jsonnet
Jsonnet is a superset of the JSON format that includes features such as conditionals, variables and imports. Refer to the jsonnet docs to understand how it works.
Note: unlike jinja2 templates, one jsonnet template can output multiple files (one per object declared in the file).
## Accessing the inventory
Typical jsonnet files would start as follows:
```jsonnet
local kap = import "lib/kapitan.libjsonnet"; #(1)!
local inv = kap.inventory(); #(2)!
local p = inv.parameters; #(3)!

{
  "data_java_opts": p.elasticsearch.roles.data.java_opts, #(4)!
}
```
Import the Kapitan inventory library.
Assign the content of the full inventory for this specific target to the inv variable.
Assign the content of the inventory.parameters to a variable p for convenience.
Use the p variable to access a specific inventory value
Note: The dictionary keys of the jsonnet object are used as filenames for the generated output files. If your jsonnet is not a dictionary, but is a valid json(net) object, then the output filename will be the same as the input filename. E.g. 'my_string' inside templates/input_file.jsonnet will produce an output file named input_file.json containing "my_string".
If validation.valid is not true, it will then fail compilation and display validation.reason.
Fails validation because storage has an invalid pattern (10Z)
```text
Jsonnet error: failed to compile /code/components/mysql/main.jsonnet:
RUNTIME ERROR: '10Z' does not match '^[0-9]+[MGT]{1}$'

Failed validating 'pattern' in schema['properties']['storage']:
    {'pattern': '^[0-9]+[MGT]{1}$', 'type': 'string'}

On instance['storage']:
    '10Z'

/code/mysql/main.jsonnet:(19:1)-(43:2)

Compile error: failed to compile target: minikube-mysql
```
# Input Type | Kadet
Kadet is an extensible input type for Kapitan that enables you to generate templates using Python.
The key benefit is the ability to utilize familiar programming principles while having access to Kapitan's powerful inventory system.
A library that defines resources as classes using the BaseObj class is required. These can then be utilized within components to render output.
The following functions are provided by the class BaseObj().
Method definitions:

- new(): provides parameter checking capabilities
- body(): enables in-depth parameter configuration

Method functions:

- root(): defines values that will be compiled into the output
- need(): ability to check & define input parameters
- update_root(): updates the template file associated with the class
A class can be a resource such as a Kubernetes Deployment as shown here:
The deployment is a BaseObj() which has two main functions.

- new(self) is used to perform parameter validation & template compilation
- body(self) is utilized to set those parameters to be rendered.
- self.root.metadata.name is a direct reference to a key in the corresponding YAML.
Kadet supports importing libraries as you would normally do with Python. These libraries can then be used by the components to generate the required output.
We import a library called kubelib using load_from_search_paths()
We use kubelib to create a Container
We create an output of type BaseObj and we will be updating the root element of this output.
We use kubelib to create a Deployment kind. The Deployment makes use of the Container created.
We use kubelib to create a Service kind.
We return the object. Kapitan will render everything under output.root
Kadet uses a library called addict to organise the parameters in line with the YAML templates. As shown above, we create a BaseObj() named output and update the root of this output with the data structure returned from kubelib. This output is then returned to Kapitan to be compiled into the desired output type.
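The behaviour addict provides can be approximated in a few lines of plain Python. The simplified `Dict` below is an illustration, not the real library, but it shows why paths like `self.root.metadata.name` can be assigned without first creating the intermediate keys:

```python
class Dict(dict):
    """Minimal addict-style dictionary: attribute access maps to item
    access, and missing keys auto-create nested Dict instances."""

    def __missing__(self, key):
        # looking up an absent key creates and stores a nested Dict
        value = self[key] = Dict()
        return value

    def __getattr__(self, key):
        return self[key]

    def __setattr__(self, key, value):
        self[key] = value


root = Dict()
root.metadata.name = "nginx"   # "metadata" dict created on the fly
root.spec.replicas = 3
```

Assigning through a dotted path builds the same nested structure that later serializes naturally to YAML.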
For a deeper understanding please refer to github.com/kapicorp/kadet
This input type simply removes files or directories. This can be helpful if you can't control particular files generated during other compile inputs.
For example, to remove a file named copy_target, specify an entry to input_paths, compiled/${kapitan:vars:target}/copy_target.
```yaml
parameters:
  target_name: removal
  kapitan:
    vars:
      target: ${target_name}
    compile:
      - input_type: copy
        input_paths:
          - copy_target
        output_path: .
      # test removal of a file
      - input_type: remove
        input_paths:
          - compiled/${kapitan:vars:target}/copy_target
        output_path: .
```
As a reminder, each input block within the compile array is run sequentially for a target in Kapitan. If we reversed the order of the inputs above like so:
The next thing you want to learn about the inventory are classes. A class is a yaml file containing a fragment of yaml that we want to import and merge into the inventory.
Classes are fragments of yaml: feature sets, commonalities between targets. Classes let you compose your Inventory from smaller bits, eliminating duplication and exposing all important parameters from a single, logically organised place. As the Inventory lets you reference other parameters in the hierarchy, classes become places where you can define something that will then get referenced from another section of the inventory, allowing for composition.
Classes are organised under the inventory/classes directory substructure. They are organised hierarchically in subfolders, and the way they can be imported into a target or other classes depends on their location relative to the inventory/classes directory.
Notice that this class includes an import definition for another class, kapitan.common. We've already learned this means that Kapitan will import a file on disk called inventory/classes/kapitan/common.yml.
You can also see that in the parameters section we now encounter a new syntax which unlocks another powerful inventory feature: parameters interpolation!
# What is the inventory?
The Inventory is a core component of Kapitan: this section aims to explain how it works and how to best take advantage of it.
The Inventory is a hierarchical YAML based structure which you use to capture anything that you want to make available to Kapitan, so that it can be passed on to its templating engines.
Classes are found by default under the inventory/classes directory and define common settings and data that you define once and can be included in other files. This promotes consistency and reduces duplication.
Classes are identified with a name that maps to the directory structure they are nested under. In this example, the kapicorp.common class is represented by the file classes/kapicorp/common.yml.
Targets are found by default under the inventory/targets directory and represent the different environments or components you want to manage. Each target is a YAML file that defines a set of configurations.
For example, you might have targets for production, staging, and development environments.
By combining target and classes, the Inventory becomes the SSOT for your whole configuration, and learning how to use it will unleash the real power of Kapitan.
namespace should take the same value defined in target_name
target_name should take the literal string dev
application.location should take the same value as defined in cluster.location
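Conceptually, reference resolution behaves like the simplified sketch below. Kapitan's actual implementation lives in reclass; the colon-separated `${a:b:c}` path syntax matches examples elsewhere in these docs (e.g. `${kapitan:vars:target}`), and this helper is purely illustrative:

```python
import re

def lookup(parameters, path):
    # walk the nested dict following a colon-separated path
    # such as "cluster:location"
    node = parameters
    for part in path.split(":"):
        node = node[part]
    return node

def interpolate(parameters, value):
    # replace every ${...} reference with the value at that path
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: str(lookup(parameters, m.group(1))),
                  value)

parameters = {
    "target_name": "dev",
    "cluster": {"location": "europe"},
    "namespace": "${target_name}",
    "application": {"location": "${cluster:location}"},
}
```

Here `interpolate(parameters, parameters["namespace"])` resolves to the literal string dev, mirroring the behaviour described above.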
It is important to notice that the inventory can refer to values defined in other classes, as long as they are imported by the target. So for instance with the following example
Here in this case application.location refers to a value location which has been defined elsewhere, perhaps (but not necessarily) in the project.production class.
Also notice that the class name (project.production) does not in any way influence the name or the structure of the YAML it imports into the file.
Reclass-rs is a reimplementation of Kapitan's Reclass fork in Rust. Please note that the Rust implementation doesn't support all the features of Kapitan's Reclass fork yet.
However, reclass-rs improves rendering time for the inventory significantly, especially if you're making heavy use of parameter references in class includes. If some of the Reclass features or options that you're using are missing in reclass-rs, don't hesitate to open an issue in the reclass-rs project.
To use the reclass-rs inventory backend, you need to pass --inventory-backend=reclass-rs on the command line. If you want to permanently switch to the reclass-rs inventory backend, you can select the inventory backend in the .kapitan config file:
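As an illustrative `.kapitan` fragment, the selection might look like the following. The `global` section and `inventory-backend` key names are assumptions mirroring the CLI flag; verify the exact keys against your Kapitan version's documentation:

```yaml
# .kapitan -- assumed layout: a `global` section holding defaults
# for global CLI flags such as --inventory-backend
global:
  inventory-backend: reclass-rs
```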
For the performance comparison, a real Kapitan inventory which makes heavy use of parameter interpolation in class includes was rendered with both Reclass and reclass-rs. The example inventory that was used for the performance comparison contains 325 classes and 56 targets. The example inventory renders to a total of 25MB of YAML.
```shell
$ time kapitan inventory -v --inventory-backend=reclass > inv.yml
[ ... some output omitted ... ]
kapitan.resources DEBUG Using reclass as inventory backend
kapitan.inventory.backends.reclass DEBUG Inventory reclass: No config file found. Using reclass inventory config defaults
kapitan.inventory.backends.reclass DEBUG Inventory rendering with reclass took 0:01:06.037057

real 1m23.840s
user 1m23.520s
sys 0m0.287s
```
Reclass takes 1 minute and 6 seconds to render the example inventory. The rest of the runtime (roughly 18 seconds) is spent in writing the resulting 25MB of YAML to the output file.
```shell
$ time kapitan inventory -v --inventory-backend=reclass-rs > inv-rs.yml
[ ... some output omitted ... ]
kapitan.resources DEBUG Using reclass-rs as inventory backend
kapitan.inventory.backends.reclass DEBUG Inventory reclass: No config file found. Using reclass inventory config defaults
reclass-config.yml entry 'storage_type=yaml_fs' not implemented yet, ignoring...
reclass-config.yml entry 'inventory_base_uri=./inventory' not implemented yet, ignoring...
reclass-config.yml entry 'allow_none_override=true' not implemented yet, ignoring...
kapitan.inventory.backends.reclass_rs DEBUG Inventory rendering with reclass-rs took 0:00:01.717107

real 0m19.921s
user 0m35.586s
sys 0m1.066s
```
reclass-rs takes 1.7 seconds to render the example inventory. The rest of the runtime (roughly 18 seconds) is spent in writing the resulting 25MB of YAML to the output file.
Targets are defined as YAML files within the inventory/targets/ directory of your Kapitan project. Each target file typically includes:
```yaml
classes:
  # A list of classes to inherit configurations from.
  # This allows you to reuse common settings and avoid repetition
  -

parameters:
  # file parameters that override or extend the parameters inherited from previously loaded classes
```
Reads the target file: Kapitan parses the YAML file for the specified target.
Merges configurations: It merges the parameters from the included classes with the target-specific parameters, giving priority to the target's values.
Generates output in compiled/target/path/targetname: It uses this merged configuration data, along with the input types and generators, to create the final configuration files for the target environment.
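The merge step can be pictured with a simplified deep-merge sketch. This is illustrative only: reclass's real merge also handles lists, override markers, and parameter interpolation. The nginx values are hypothetical:

```python
def deep_merge(base, override):
    """Merge two parameter trees: values from `override` (the target)
    win over `base` (the classes); nested dicts merge recursively."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# hypothetical parameters: a class sets defaults, the target overrides one
class_params = {"nginx": {"replicas": 1, "image": "nginx:1.15.8"}}
target_params = {"nginx": {"replicas": 3}}
merged = deep_merge(class_params, target_params)
```

The target's `replicas: 3` wins, while the class-provided image is inherited unchanged.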
When you run kapitan without a target selector, it will run compile for all targets it discovers under the inventory/targets subdirectory.
Targets are not limited to living directly within the inventory/targets directory.
They can be organized into subdirectories to create a more structured and hierarchical inventory. This is particularly useful for managing large and complex projects.
When targets are organized in subdirectories, Kapitan uses the full path from the targets/ directory to create the target name. This name is then used to identify the target during compilation and in the generated output.
In this example, the my-cluster.yml target file is located within the clusters/production/ subdirectory, and can be identified with clusters.production.my-cluster.
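The path-to-name mapping can be sketched as follows (an illustrative helper, not part of Kapitan's API):

```python
from pathlib import PurePosixPath

def target_name(path):
    # strip the inventory/targets prefix and the file extension,
    # then join the remaining path components with dots
    relative = PurePosixPath(path).relative_to("inventory/targets")
    return str(relative.with_suffix("")).replace("/", ".")
```

For the example above, `target_name("inventory/targets/clusters/production/my-cluster.yml")` yields clusters.production.my-cluster.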