# Build Backends

The build backend is a software or service responsible for actually building the images. DIB itself is not capable of building images; it delegates this part to the build backend.
DIB supports multiple build backends. Currently, the available backends are `docker` and `kaniko`. You can select the backend to use with the `--backend` option.
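For example, to run builds through Kaniko instead of the local Docker daemon:

```bash
dib build --backend kaniko
```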
**Executor compatibility matrix**

| Backend | Local | Docker | Kubernetes |
|---------|-------|--------|------------|
| Docker  | ✔     | ✗      | ✗          |
| Kaniko  | ✗     | ✔      | ✔          |
## Docker

The `docker` backend uses Docker behind the scenes, and runs `docker build`. You need to have the Docker CLI installed locally to use this backend.
**Authentication**

The Docker daemon requires authentication to pull and push images from private registries. Run the `docker login` command to authenticate.
Authentication settings are stored in a `config.json` file, located by default in `$HOME/.docker/`. If you need to provide a different configuration, you can set the `DOCKER_CONFIG` variable to the path to another directory, which should contain a `config.json` file.
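For instance, to point DIB at an alternate Docker configuration directory for a single run (the path is illustrative):

```bash
export DOCKER_CONFIG=$HOME/.docker-ci   # directory containing a config.json
dib build
```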
**Remote Daemon**

If you want to set a custom Docker daemon host, you can set the `DOCKER_HOST` environment variable. The builds will then run on the remote host instead of using the local Docker daemon.
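A minimal sketch, assuming a remote daemon reachable over SSH (hostname illustrative):

```bash
export DOCKER_HOST=ssh://ci@builder.example.org
dib build
```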
**BuildKit**

If available, DIB will try to use the BuildKit engine to build images, which is faster than the default Docker build engine.
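Since this backend shells out to the Docker CLI, the standard `DOCKER_BUILDKIT` environment variable should apply (this is Docker behavior, not a DIB flag):

```bash
export DOCKER_BUILDKIT=1   # set to 0 to force the legacy builder
dib build
```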
## Kaniko

Kaniko offers a way to build container images inside a container or Kubernetes cluster, without the security tradeoff of running a docker daemon container with host privileges.
**Local builds**

As Kaniko must run in a container, it requires Docker when running local builds, as it uses the `docker` executor.
See the `kaniko` section in the configuration reference.
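A minimal sketch of the relevant settings (see the configuration reference for the full list):

```yaml
# .dib.yaml
backend: kaniko
kaniko:
  executor:
    docker:
      image: eu.gcr.io/radio-france-k8s/kaniko:latest
```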
# Best Practices

## Pin external dependency versions

As DIB only rebuilds images when something changes in the build context (including the Dockerfile), external dependencies should always be pinned to a specific version, so upgrading the dependency triggers a rebuild.
Example:

```dockerfile
RUN apt-get install package=1.0.0
```
## Use .dockerignore

The `.dockerignore` file lists file patterns that should not be included in the build context. DIB also ignores those files when it computes the checksum, so no rebuild is triggered when they are modified.
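A minimal sketch; the patterns are illustrative and should target files that don't affect the built image:

```
# .dockerignore
.git
*.md
reports/
```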
# dib

An Opinionated Docker Image Builder

## Synopsis

Docker Image Builder helps building a complex image dependency graph.

Run `dib --help` for more information.
### Options

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -h, --help                         help for dib
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib build

Run Docker image builds

## Synopsis

dib build will compute the graph of images, and compare it to the last built state.

For each image, if any file that is part of its docker context has changed, the image will be rebuilt. Otherwise, dib will create a new tag based on the previous tag.

```
dib build [flags]
```
### Options

```
  -b, --backend string          Build Backend used to run image builds. Supported backends: [docker kaniko] (default "docker")
      --dry-run                 Simulate what would happen without actually doing anything dangerous.
      --force-rebuild           Forces rebuilding the entire image graph, without regarding if the target version already exists.
  -h, --help                    help for build
      --include-tests strings   List of test runners to include during the test phase.
      --local-only              Build docker images locally, do not push on remote registry
      --no-graph                Disable generation of graph during the build process.
      --no-retag                Disable re-tagging images after build. Note that temporary tags with the "dev-" prefix may still be pushed to the registry.
      --no-tests                Disable execution of tests (unit tests, scans, etc...) after the build.
      --rate-limit int          Concurrent number of builds that can run simultaneously (default 1)
      --release                 Enable release mode to tag all images with extra tags found in the dib.extra-tags Dockerfile labels.
      --reports-dir string      Path to the directory where the reports are generated. (default "reports")
```
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib completion

Generate the autocompletion script for the specified shell

## Synopsis

Generate the autocompletion script for dib for the specified shell.
See each sub-command's help for details on how to use the generated script.

### Options

```
  -h, --help   help for completion
```
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib completion bash

Generate the autocompletion script for bash

## Synopsis

Generate the autocompletion script for the bash shell.

This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:

```
source <(dib completion bash)
```

To load completions for every new session, execute once:

#### Linux:

```
dib completion bash > /etc/bash_completion.d/dib
```

#### macOS:

```
dib completion bash > $(brew --prefix)/etc/bash_completion.d/dib
```

You will need to start a new shell for this setup to take effect.

```
dib completion bash
```
### Options

```
  -h, --help              help for bash
      --no-descriptions   disable completion descriptions
```
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib completion fish

Generate the autocompletion script for fish

## Synopsis

Generate the autocompletion script for the fish shell.

To load completions in your current shell session:

```
dib completion fish | source
```

To load completions for every new session, execute once:

```
dib completion fish > ~/.config/fish/completions/dib.fish
```

You will need to start a new shell for this setup to take effect.

```
dib completion fish [flags]
```
### Options

```
  -h, --help              help for fish
      --no-descriptions   disable completion descriptions
```
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib completion powershell

Generate the autocompletion script for powershell

## Synopsis

Generate the autocompletion script for powershell.

To load completions in your current shell session:

```
dib completion powershell | Out-String | Invoke-Expression
```

To load completions for every new session, add the output of the above command to your powershell profile.

```
dib completion powershell [flags]
```
### Options

```
  -h, --help              help for powershell
      --no-descriptions   disable completion descriptions
```
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib completion zsh

Generate the autocompletion script for zsh

## Synopsis

Generate the autocompletion script for the zsh shell.

If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:

```
echo "autoload -U compinit; compinit" >> ~/.zshrc
```

To load completions in your current shell session:

```
source <(dib completion zsh)
```

To load completions for every new session, execute once:

#### Linux:

```
dib completion zsh > "${fpath[1]}/_dib"
```

#### macOS:

```
dib completion zsh > $(brew --prefix)/share/zsh/site-functions/_dib
```

You will need to start a new shell for this setup to take effect.

```
dib completion zsh [flags]
```
### Options

```
  -h, --help              help for zsh
      --no-descriptions   disable completion descriptions
```
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib list

Print list of images managed by DIB

## Synopsis

dib list will print a list of all Docker images managed by DIB.

```
dib list [flags]
```
### Options

```
  -h, --help            help for list
  -o, --output string   Output format (console|go-template-file)
                        You can provide a custom format using go-template: like this: "-o go-template-file=...".
```
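For example, to render the list with a custom Go template (the template path is illustrative):

```bash
dib list -o go-template-file=templates/images.tmpl
```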
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# dib version

Print current dib version

```
dib version [flags]
```

### Options

```
  -h, --help   help for version
```
### Options inherited from parent commands

```
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
```
# Configuration Reference

```yaml
---
# Log level: "trace", "debug", "info", "warning", "error", "fatal", "panic". Defaults to "info".
log_level: info

# URL of the registry where the images should be stored.
#
# DIB will use the local docker configuration to fetch metadata about existing images. You may use the DOCKER_CONFIG
# environment variable to set a custom docker config path.
# See the official Docker documentation (https://docs.docker.com/engine/reference/commandline/cli/#configuration-files).
# The build backend must also be authenticated to have permission to push images.
registry_url: registry.example.org

# The placeholder tag DIB uses to mark which images are the reference. Defaults to "latest".
# Change this value if you don't want to use "latest" tags, or if images may be tagged "latest" by other sources.
placeholder_tag: latest

# The rate limit can be increased to allow parallel builds. This dramatically reduces the build times
# when using the Kubernetes executor, as build pods are scheduled across multiple nodes.
rate_limit: 1

# Path to the directory where the reports are generated. The directory will be created if it doesn't exist.
reports_dir: reports

# The build backend. Can either be set to "docker" or "kaniko".
#
# Note: the kaniko backend must be run in a containerized environment such as Docker or Kubernetes.
# See the "executor" section below.
backend: docker

# Kaniko settings. Required only if using the Kaniko build backend.
kaniko:
  # The build context directory has to be uploaded somewhere in order for the Kaniko pod to retrieve it,
  # when using a remote executor (Kubernetes or remote docker host). Currently, only AWS S3 is supported.
  context:
    # Store the build context in an AWS S3 bucket.
    s3:
      bucket: my-bucket
      region: eu-west-3
  # Executor configuration. It is not necessary to provide valid configurations for all of them,
  # just pick one according to your needs.
  executor:
    # Configuration for the "docker" executor.
    docker:
      image: eu.gcr.io/radio-france-k8s/kaniko:latest
    # Configuration for the "kubernetes" executor.
    kubernetes:
      namespace: kaniko
      image: eu.gcr.io/radio-france-k8s/kaniko:latest
      # References a secret containing the Docker configuration file used to authenticate to the registry.
      docker_config_secret: docker-config-prod
      env_secrets:
        # Additional Secrets mounted as environment variables.
        # Used for instance to download the build context from AWS S3.
        - aws-s3-secret
      container_override: |
        resources:
          limits:
            cpu: 2
            memory: 8Gi
          requests:
            cpu: 1
            memory: 2Gi
      pod_template_override: |
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kops.k8s.io/instancegroup
                        operator: In
                        values:
                          - spot-instances

# Enable test suites execution after each image build.
include_tests:
  # Enable Goss tests. See the "goss" configuration section below.
  # To test an image, place a goss.yml file in its build context.
  # Learn more about Goss: https://github.com/goss-org/goss
  - goss
  # Enable Trivy vulnerability scans. See the "trivy" configuration section below.
  # Learn more about Trivy: https://aquasecurity.github.io/trivy
  - trivy

goss:
  executor:
    # Kubernetes executor configuration. Required when using the kubernetes build executor.
    kubernetes:
      enabled: true
      namespace: goss
      image: aelsabbahy/goss:latest
      image_pull_secrets:
      # - private-container-registry

trivy:
  executor:
    # Kubernetes executor configuration. Required when using the kubernetes build executor.
    kubernetes:
      enabled: true
      namespace: trivy
      image: ghcr.io/aquasecurity/trivy:latest
      # References a secret containing the Docker configuration file used to authenticate to the registry.
      docker_config_secret: docker-config-ci
      image_pull_secrets:
      # - private-container-registry
      container_override: |
        resources:
          limits:
            cpu: 2
            memory: 3Gi
          requests:
            cpu: 2
            memory: 1Gi
        env:
          - name: GOOGLE_APPLICATION_CREDENTIALS
            value: /credentials/gcr_service_account.json
          - name: TRIVY_TIMEOUT
            value: "30m0s"
        volumeMounts:
          - mountPath: /credentials
            name: private-registry-credentials
            readOnly: true
      pod_template_override: |
        spec:
          volumes:
            - name: private-registry-credentials
              secret:
                defaultMode: 420
                secretName: private-registry-credentials

# Easter egg: A path to a file containing a custom wordlist that will be used to
# generate the humanized hashes for image tags. The list must contain exactly 256 words.
# You can enable the usage of this list in each Dockerfile with a custom label:
# LABEL dib.use-custom-hash-list="true"
# Please keep in mind that each time you change this list, the images using the
# use-custom-hash-list label may see their hashes regenerated.
humanized_hash_list: ""
# humanized_hash_list: "custom_wordlist.txt"
```
# Configuration

DIB can be configured either by command-line flags, environment variables, or a configuration file.

The command-line flags have the highest priority, then environment variables, then the config file. You can set some default values in the configuration file, and then override them with environment variables or command-line flags.
## Command-line flags

Example:

```bash
dib build --registry-url=gcr.io/project
```
## Environment variables

DIB auto-discovers configuration from environment variables prefixed with `DIB_`, followed by the capitalized, snake_cased flag name.
Example:

```bash
export DIB_REGISTRY_URL=gcr.io/project
dib build
```
## Configuration file

DIB uses a YAML configuration file in addition to command-line arguments. It will look for a file named `.dib.yaml` in the current working directory. You can change the file location by setting the `--config` (`-c`) flag.

The YAML keys are equivalent to the flag names, in snake_case.
Example:

```yaml
# .dib.yaml
registry_url: gcr.io/project
...
```
You can find more examples here. See also the reference configuration file.

# Documentation

The documentation is generated with `mkdocs`. It generates a static website in plain HTML from the Markdown files present in the `docs/` directory.

We also use the Cobra built-in documentation generator for DIB commands.

## Local Setup

Let's set up a local Python environment and run the documentation server with live-reload.

Create a virtual env:

```bash
python -m venv venv
source venv/bin/activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Generate docs of dib commands:

```bash
make docs
```

Run the `mkdocs` server:

```bash
mkdocs serve
```
+DIB supports multiple build executors. An executor is a platform able to run image builds and tests. +Unlike the build backends which can be explicitely chosen, the executor is automatically selected depending on the type +of operation (build, test), and the executors configured in the configuration file.
+Build backend compatibility matrix
+Executor | +Docker | +Kaniko | +
---|---|---|
Local | +✔ | +✗ | +
Docker | +✗ | +✔ | +
Kubernetes | +✗ | +✔ | +
Runs commands using the local exec system call. Use the --local-only
flag to force the local executor.
Runs commands in a docker container, using the docker run
command.
Creates pods in a kubernetes cluster, using the kubernetes API. +DIB uses the current kube context, please make do
+See an example configuration in the +configuration reference section.
+ + + + + + +Images managed by DIB will get tagged with the human-readable version of the computed hash. This is not very convenient +in some cases, for instance if we want to tag an image with the explicit version of the contained software.
+DIB allows additional tags to be definedusing a label in the Dockerfile: +
LABEL dib.extra-tags="v1.0.0,v1.0,v1"
+
The label may contain a coma-separated list of tags to be created when the image
+gets promoted with the --release
flag.
DIB is a tool designed to help build multiple Docker images defined within a directory, possibly having dependencies +with one another, in a single command.
+Warning
+DIB is still at an early stage, development is still ongoing and new minor releases may bring some breaking changes. +This may occur until we release the v1.
+As containers have become the standard software packaging technology, we have to deal with an ever-increasing number of +image definitions. In DevOps teams especially, we need to manage dozens of Dockerfiles, and the monorepo is often the +solution of choice to store and version them.
+We use CI/CD pipelines to help by automatically building and pushing the images to a registry, but it's often +inefficient as all the images are rebuilt at every commit/pull request. +There are possible solutions to optimize this, like changesets detection or build cache persistence to increase +efficiency, but it's not an easy task.
+Also, being able to test and validate the produced images was also something that we were looking forward to.
+DIB was created to solve these issues, and manage a large number of images in the most efficient way as possible.
+Before using DIB, there are important basic concepts to know about, to understand how it works internally.
+DIB needs a path to a root directory containing all the images it should manage. The structure of this directory is not +important, DIB will auto-discover all the Dockerfiles within it recursively.
+Example with a simple directory structure: +
images/
+├── alpine
+| └── Dockerfile
+└── debian
+ ├── bookworm
+ | └── Dockerfile
+ └── bullseye
+ └── Dockerfile
+
In order to be discovered, the Dockerfile must contain the name
label:
+
LABEL name="alpine"
+
If the name
label is missing, the image will be ignored and DIB won't manage it.
Because some images may depend on other images (when a FROM
statement references an image also defined within the
+build directory), DIB internally builds a graph of dependencies (DAG). During the build process, DIB waits until all
+parent images finish to build before building the children.
Example dependency graph: +
graph LR
+ A[alpine] --> B[nodejs];
+ B --> C[foo];
+ D[debian] --> E[bar];
+ B --> E;
+In this example, DIB will wait for the alpine
image to be built before proceeding to nodejs
, and then both
+alpine
and bullseye
can be built in parallel (see the --rate-limit
build option).
Once debian
is completed, the build of bar
begins, and as soon as nodejs
is completed, foo
follows.
DIB only builds an image when something has changed in its build context since the last build. To track the changes, +DIB computes a checksum of all the files in the context, and generates a human-readable tag out of it. If any file +changes in the build context (or in the build context of any parent image), the computed human-readable tag changes as +well.
+DIB knows it needs to rebuild an image if the target tag is not present in the registry.
+When updating images having children, DIB needs to update the tags in FROM
statements in all child images
+before running the build, to match the newly computed tag.
Example:
+Given a parent image named "parent": +
LABEL name="parent"
+
And a child image referencing the parent: +
FROM registry.example.com/parent:REPLACE_ME
+LABEL name="child"
+
When we build using the same placeholder tag: +
dib build \
+ --registry-url=registry.example.com \
+ --placeholder-tag=REPLACE_ME
+
Then any change to the parent image will be inherited by the child.
+By default, the placeholder tag is latest
.
In some cases, we want to be able to freeze the version of the parent image to a specific tag. To do so, just change the
+tag in the FROM
statement to be anything else than the placeholder tag:
+
FROM registry.example.com/parent:some-specific-tag
+LABEL name="child"
+
Then any change to the parent image will not be inherited by the child.
+DIB always tries to build and push images when it detects some changes, by it doesn't move the reference tag
+(latest
by default) to the latest version. This allows DIB to run on feature branches without interfering with
+one another. Once the changes are satisfying, just re-run DIB with the --release
flag to promote the current
+version with the reference tag.
Example workflow
+Let's assume we have a simple GitFlow setup, with CI/CD pipelines running on each commit to build docker images with DIB.
+When one creates a branch from the main branch, and commits some changes to an image. DIB builds and pushes the
+cat-south
tag, but latest
still references the same tag (beacon-two
):
gitGraph
+ commit id: "autumn-golf"
+ commit id: "beacon-two" tag: "latest"
+ branch feature
+ commit id: "cat-south"
+Once the feature branch gets merged, the cat-south
tag is promoted to latest
:
+
```mermaid
gitGraph
    commit id: "autumn-golf"
    commit id: "beacon-two"
    branch feature
    commit id: "cat-south"
    checkout main
    merge feature
    commit id: "cat-south " tag: "latest"
```
## License

DIB is licensed under the CeCILL V2.1 License.
# Installation

Install the latest release on macOS or Linux with:

```bash
go install github.com/radiofrance/dib@latest
```

Binaries are available to download from the GitHub releases page.
## Shell autocompletion

Configure your shell to load DIB completions:

### Bash

To load completions, run:

```bash
. <(dib completion bash)
```
To configure your bash shell to load completions for each session, add this to your bashrc:

```bash
# ~/.bashrc or ~/.bash_profile
command -v dib >/dev/null && . <(dib completion bash)
```
If you have an alias for dib, you can extend shell completion to work with that alias:

```bash
# ~/.bashrc or ~/.bash_profile
alias tm=dib
complete -F __start_dib tm
```
### Fish

To configure your fish shell to load completions for each session, write this script to your completions dir:

```bash
dib completion fish > ~/.config/fish/completions/dib.fish
```
### Powershell

To load completions, run:

```powershell
. <(dib completion powershell)
```
To configure your powershell shell to load completions for each session, add the following to your powershell profile:

Windows:

```powershell
cd "$env:USERPROFILE\Documents\WindowsPowerShell\Modules"
dib completion powershell >> dib-completion.ps1
```

Linux:

```powershell
cd "${XDG_CONFIG_HOME:-"$HOME/.config/"}/powershell/modules"
dib completion powershell >> dib-completions.ps1
```
### Zsh

To load completions, run:

```zsh
. <(dib completion zsh) && compdef _dib dib
```
To configure your zsh shell to load completions for each session, add this to your zshrc:

```zsh
# ~/.zshrc or ~/.profile
command -v dib >/dev/null && . <(dib completion zsh) && compdef _dib dib
```
Or write a cached file in one of the completion directories in your `${fpath}`:

```zsh
echo "${fpath// /\n}" | grep -i completion
dib completion zsh > _dib

mv _dib ~/.oh-my-zsh/completions                    # oh-my-zsh
mv _dib ~/.zprezto/modules/completion/external/src/ # zprezto
```
# Quickstart Guide

This guide will show you the basics of DIB. You will build a set of images locally using the local docker daemon.

## Prerequisites

Before using DIB, ensure you have the following dependencies installed:

Then, you need to install the DIB command-line by following the installation guide.

Make sure you have authenticated access to an OCI registry; in this guide, we'll assume it is `registry.example.com`.

Let's create a root directory containing 2 Dockerfiles in their own subdirectories. The structure will look like:

```
docker/
├── base
|   └── Dockerfile
└── child
    └── Dockerfile
```
Now create the Dockerfile for the `base` image:

```dockerfile
# docker/base/Dockerfile
FROM alpine:latest

LABEL name="base"
```
The "name" label is mandatory, it is used by DIB to name the current image, by appending the value of the label to the
+registry URL. In this case, the image name is registry.example.com/base
.
Then, create the dockerfile for the child
image, which extends the base
image:
+
# docker/child/Dockerfile
+FROM registry.example.com/base:latest
+
+LABEL name="child"
+
**Tip**

The directory structure does not matter to DIB. It builds the graph of dependencies based on the FROM statements. You can have either a flat directory structure like the one shown above, or embed child image context directories in the parent context.

## Configuration

See the configuration section.

For this guide, we'll use a configuration file, as it is the more convenient way for day-to-day usage.

Let's create a `.dib.yaml` next to the docker build directory:

```
docker/
├── base/
├── child/
└── .dib.yaml
```
Edit the file to set the registry name, used to pull and push DIB-managed images:

```yaml
registry_url: registry.example.com
```

You can check everything is correct by running `dib list`:

```console
$ dib list
Using config file: docs/examples/.dib.yaml
  NAME   HASH
  base   august-berlin-blossom-magnesium
  child  gee-minnesota-maryland-robin
```
You should get the output containing the list of images that DIB has discovered.

## Building the images

When you have all your image definitions in the build directory and the configuration set up, you can proceed to building the images:

```console
$ dib build
...
```

When it's done, you can run the build command again, and you'll see that DIB does nothing as long as the Dockerfiles remain unchanged.

When you are ready to promote the images to `latest`, run:

```console
$ dib build --release
```
# Reporting

DIB generates reports after each build. By default, the reports are generated in the `reports` directory. You can change it by setting the `--reports-dir` option to another location.

## HTML Report

The HTML report is the one you are going to use the most. Just click on the link displayed on the DIB output to browse the report.

In the report you'll find:

Preview:

## jUnit Reports

Test executors generate reports in jUnit format. They can then be parsed in a CI pipeline and displayed in a user-friendly fashion.
+And more...
+ + + + + + +DIB is a tool designed to help build multiple Docker images defined within a directory, possibly having dependencies with one another, in a single command.
Warning
DIB is still at an early stage, development is still ongoing and new minor releases may bring some breaking changes. This may occur until we release the v1.
"},{"location":"#purpose","title":"Purpose","text":"As containers have become the standard software packaging technology, we have to deal with an ever-increasing number of image definitions. In DevOps teams especially, we need to manage dozens of Dockerfiles, and the monorepo is often the solution of choice to store and version them.
We use CI/CD pipelines to help by automatically building and pushing the images to a registry, but it's often inefficient as all the images are rebuilt at every commit/pull request. There are possible solutions to optimize this, like changesets detection or build cache persistence to increase efficiency, but it's not an easy task.
Also, being able to test and validate the produced images was also something that we were looking forward to.
DIB was created to solve these issues, and manage a large number of images in the most efficient way as possible.
"},{"location":"#concepts","title":"Concepts","text":"Before using DIB, there are important basic concepts to know about, to understand how it works internally.
"},{"location":"#build-directory","title":"Build Directory","text":"DIB needs a path to a root directory containing all the images it should manage. The structure of this directory is not important, DIB will auto-discover all the Dockerfiles within it recursively.
Example with a simple directory structure:
images/\n\u251c\u2500\u2500 alpine\n| \u2514\u2500\u2500 Dockerfile\n\u2514\u2500\u2500 debian\n \u251c\u2500\u2500 bookworm\n | \u2514\u2500\u2500 Dockerfile\n \u2514\u2500\u2500 bullseye\n \u2514\u2500\u2500 Dockerfile\n
In order to be discovered, the Dockerfile must contain the name
label:
LABEL name=\"alpine\"\n
If the name
label is missing, the image will be ignored and DIB won't manage it.
Because some images may depend on other images (when a FROM
statement references an image also defined within the build directory), DIB internally builds a graph of dependencies (DAG). During the build process, DIB waits until all parent images finish to build before building the children.
Example dependency graph:
graph LR\n A[alpine] --> B[nodejs];\n B --> C[foo];\n D[debian] --> E[bar];\n B --> E;
In this example, DIB will wait for the alpine
image to be built before proceeding to nodejs
, and then both alpine
and bullseye
can be built in parallel (see the --rate-limit
build option).
Once debian
is completed, the build of bar
begins, and as soon as nodejs
is completed, foo
follows.
DIB only builds an image when something has changed in its build context since the last build. To track the changes, DIB computes a checksum of all the files in the context, and generates a human-readable tag out of it. If any file changes in the build context (or in the build context of any parent image), the computed human-readable tag changes as well.
DIB knows it needs to rebuild an image if the target tag is not present in the registry.
"},{"location":"#placeholder-tag","title":"Placeholder Tag","text":"When updating images having children, DIB needs to update the tags in FROM
statements in all child images before running the build, to match the newly computed tag.
Example:
Given a parent image named \"parent\":
LABEL name=\"parent\"\n
And a child image referencing the parent:
FROM registry.example.com/parent:REPLACE_ME\nLABEL name=\"child\"\n
When we build using the same placeholder tag:
dib build \\\n--registry-url=registry.example.com \\\n--placeholder-tag=REPLACE_ME\n
Then any change to the parent image will be inherited by the child. By default, the placeholder tag is latest
.
In some cases, we want to be able to freeze the version of the parent image to a specific tag. To do so, just change the tag in the FROM
statement to be anything else than the placeholder tag:
FROM registry.example.com/parent:some-specific-tag\nLABEL name=\"child\"\n
Then any change to the parent image will not be inherited by the child.
"},{"location":"#tag-promotion","title":"Tag promotion","text":"DIB always tries to build and push images when it detects some changes, by it doesn't move the reference tag (latest
by default) to the latest version. This allows DIB to run on feature branches without interfering with one another. Once the changes are satisfying, just re-run DIB with the --release
flag to promote the current version with the reference tag.
Example workflow
Let's assume we have a simple GitFlow setup, with CI/CD pipelines running on each commit to build docker images with DIB.
When one creates a branch from the main branch, and commits some changes to an image. DIB builds and pushes the cat-south
tag, but latest
still references the same tag (beacon-two
):
gitGraph\n commit id: \"autumn-golf\"\n commit id: \"beacon-two\" tag: \"latest\"\n branch feature\n commit id: \"cat-south\"
Once the feature branch gets merged, the cat-south
tag is promoted to latest
:
gitGraph\n commit id: \"autumn-golf\"\n commit id: \"beacon-two\"\n branch feature\n commit id: \"cat-south\"\n checkout main\n merge feature\n commit id: \"cat-south \" tag: \"latest\"
"},{"location":"#license","title":"License","text":"DIB is licensed under the CeCILL V2.1 License
"},{"location":"backends/","title":"Build Backends","text":"The build backend is a software or service responsible for actually building the images. DIB itself is not capable of building images, it delegates this part to the build backend.
DIB supports multiple build backends. Currently, available backends are docker
and kaniko
. You can select the backend to use with the --backend
option.
Executor compatibility matrix
Backend Local Docker Kubernetes Docker \u2714 \u2717 \u2717 Kaniko \u2717 \u2714 \u2714"},{"location":"backends/#docker","title":"Docker","text":"The docker
backend uses Docker behind the scenes, and runs docker build
You need to have the Docker CLI installed locally to use this backend.
Authentication
The Docker Daemon requires authentication to pull and push images from private registries. Run the docker login
command to authenticate.
Authentication settings are stored in a config.json
file located by default in $HOME/.docker/
. If you need to provide a different configuration, you can set the DOCKER_CONFIG
variable to the path to another directory, which should contain a config.json
file.
Remote Daemon
If you want to set a custom docker daemon host, you can set the DOCKER_HOST
environment variable. The builds will then run on the remote host instead of using the local Docker daemon.
BuildKit
If available, DIB will try to use the BuildKit engine to build images, which is faster than the default Docker build engine.
"},{"location":"backends/#kaniko","title":"Kaniko","text":"Kaniko offers a way to build container images inside a container or Kubernetes cluster, without the security tradeoff of running a docker daemon container with host privileges.
BuildKit
As Kaniko must run in a container, it requires Docker when running local builds as it uses the docker
executor.
See the kaniko
section in the configuration reference.
As DIB only rebuilds images when something changes in the build context (including the Dockerfile), external dependencies should always be pinned to a specific version, so upgrading the dependency triggers a rebuild.
Example:
RUN apt-get install package@1.0.0\n
"},{"location":"best-practices/#use-dockerignore","title":"Use .dockerignore","text":"The .dockerignore
lists file patterns that should not be included in the build context. DIB also ignores those files when it computes the checksum, so no rebuild is triggered when they are modified.
---\n# Log level: \"trace\", \"debug\", \"info\", \"warning\", \"error\", \"fatal\", \"panic\". Defaults to \"info\".\nlog_level: info\n\n# URL of the registry where the images should be stored.\n#\n# DIB will use the local docker configuration to fetch metadata about existing images. You may use the DOCKER_CONFIG\n# environment variable to set a custom docker config path.\n# See the official Docker documentation (https://docs.docker.com/engine/reference/commandline/cli/#configuration-files).\n# The build backend must also be authenticated to have permission to push images.\nregistry_url: registry.example.org\n\n# The placeholder tag DIB uses to mark which images are the reference. Defaults to \"latest\".\n# Change this value if you don't want to use \"latest\" tags, or if images may be tagged \"latest\" by other sources.\nplaceholder_tag: latest\n\n# The rate limit can be increased to allow parallel builds. This dramatically reduces the build times\n# when using the Kubernetes executor as build pods are scheduled across multiple nodes.\nrate_limit: 1\n\n# Path to the directory where the reports are generated. The directory will be created if it doesn't exist.\nreports_dir: reports\n\n# The build backend. Can either be set to \"docker\" or \"kaniko\".\n#\n# Note: the kaniko backend must be run in a containerized environment such as Docker or Kubernetes.\n# See the \"executor\" section below.\nbackend: docker\n\n# Kaniko settings. Required only if using the Kaniko build backend.\nkaniko:\n# The build context directory has to be uploaded somewhere in order for the Kaniko pod to retrieve it,\n# when using remote executor (Kuberentes or remote docker host). Currently, only AWS S3 is supported.\ncontext:\n# Store the build context in an AWS S3 bucket.\ns3:\nbucket: my-bucket\nregion: eu-west-3\n# Executor configuration. It is only necessary to provide valid configurations for all of them,\n# just pick one up according to your needs.\nexecutor:\n# Configuration for the \"docker\" executor.\ndocker:\nimage: eu.gcr.io/radio-france-k8s/kaniko:latest\n# Configuration for the \"kubernetes\" executor.\nkubernetes:\nnamespace: kaniko\nimage: eu.gcr.io/radio-france-k8s/kaniko:latest\n# References a secret containing the Docker configuration file used to authenticate to the registry.\ndocker_config_secret: docker-config-prod\nenv_secrets:\n# Additional Secret mounted as environment variables.\n# Used for instance to download the build context from AWS S3.\n- aws-s3-secret\ncontainer_override: |\nresources:\nlimits:\ncpu: 2\nmemory: 8Gi\nrequests:\ncpu: 1\nmemory: 2Gi\npod_template_override: |\nspec:\naffinity:\nnodeAffinity:\nrequiredDuringSchedulingIgnoredDuringExecution:\nnodeSelectorTerms:\n- matchExpressions:\n- key: kops.k8s.io/instancegroup\noperator: In\nvalues:\n- spot-instances\n\n# Enable test suites execution after each image build.\ninclude_tests:\n# Enable Goss tests. See the \"goss\" configuration section below.\n# To test an image, place a goss.yml file in its build context.\n# Learn more about Goss: https://github.com/goss-org/goss\n- goss\n# Enable trivy vulnerability scans. See the \"trivy\" configuration section below.\n# Learn more about Trivy: https://aquasecurity.github.io/trivy\n- trivy\n\ngoss:\nexecutor:\n# Kubernetes executor configuration. Required when using the kubernetes build executor.\nkubernetes:\nenabled: true\nnamespace: goss\nimage: aelsabbahy/goss:latest\nimage_pull_secrets:\n# - private-container-registry\n\ntrivy:\nexecutor:\n# Kubernetes executor configuration. 
Required when using the kubernetes build executor.\nkubernetes:\nenabled: true\nnamespace: trivy\nimage: ghcr.io/aquasecurity/trivy:latest\n# References a secret containing the Docker configuration file used to authenticate to the registry.\ndocker_config_secret: docker-config-ci\nimage_pull_secrets:\n# - private-container-registry\ncontainer_override: |\nresources:\nlimits:\ncpu: 2\nmemory: 3Gi\nrequests:\ncpu: 2\nmemory: 1Gi\nenv:\n- name: GOOGLE_APPLICATION_CREDENTIALS\nvalue: /credentials/gcr_service_account.json\n- name: TRIVY_TIMEOUT\nvalue: \"30m0s\"\nvolumeMounts:\n- mountPath: /credentials\nname: private-registry-credentials\nreadOnly: true\npod_template_override: |\nspec:\nvolumes:\n- name: private-registry-credentials\nsecret:\ndefaultMode: 420\nsecretName: private-registry-credentials\n\n# Easter egg: A path to a file containing a custom wordlist that will be used to\n# generate the humanized hashes for image tags. The list must contain exactly 256 words.\n# You can enable the usage of this list in each Dockerfile with a custom label :\n# LABEL dib.use-custom-hash-list=\"true\"\n# Please keep in mind each time you change this list the images using the\n# use-custom-hash-list label may see their hashes regenerated.\nhumanized_hash_list: \"\"\n# humanized_hash_list: \"custom_wordlist.txt\"\n
"},{"location":"configuration/","title":"Configuration","text":"DIB can be configured either by command-line flags, environment variables or configuration file.
The command-line flags have the highest priority, then environment variables, then config file. You can set some default values in the configuration file, and then override with environment variables of command-line flags.
"},{"location":"configuration/#command-line-flags","title":"Command-line flags","text":"Example:
dib build --registry-url=gcr.io/project\n
"},{"location":"configuration/#environment-variables","title":"Environment variables","text":"DIB auto-discovers configuration from environment variables prefixed with DIB_
, followed by the capitalized, snake_cased flag name.
Example:
export DIB_REGISTRY_URL=gcr.io/project\ndib build\n
"},{"location":"configuration/#configuration-file","title":"Configuration file","text":"DIB uses a YAML configuration file in addition to command-line arguments. It will look for a file names .dib.yaml
in the current working directory. You can change the file location by setting the --config
(-c
) flag.
The YAML keys are equivalent to the flag names, in snake_case.
Example:
# .dib.yaml\nregistryUrl: gcr.io/project\n...\n
You can find more examples here. See also the reference configuration file.
"},{"location":"documentation/","title":"Documentation","text":"The documentation is generated with mkdocs
. It generates a static website in plain HTML from the Markdown files present in the docs/
directory.
We also use the Cobra built-in documentation generator for DIB commands.
"},{"location":"documentation/#local-setup","title":"Local Setup","text":"Let's set up a local Python environment and run the documentation server with live-reload.
Create a virtual env:
python -m venv venv\nsource venv/bin/activate\n
Install dependencies:
pip install -r requirements.txt\n
Generate docs of dib commands:
make docs\n
Run the mkdocs
server:
mkdocs serve\n
Go to http://localhost:8000
DIB supports multiple build executors. An executor is a platform able to run image builds and tests. Unlike the build backends which can be explicitely chosen, the executor is automatically selected depending on the type of operation (build, test), and the executors configured in the configuration file.
Build backend compatibility matrix
Executor Docker Kaniko Local \u2714 \u2717 Docker \u2717 \u2714 Kubernetes \u2717 \u2714"},{"location":"executors/#local","title":"Local","text":"Runs commands using the local exec system call. Use the --local-only
flag to force the local executor.
Runs commands in a docker container, using the docker run
command.
Creates pods in a kubernetes cluster, using the kubernetes API. DIB uses the current kube context, please make do
See an example configuration in the configuration reference section.
"},{"location":"extra-tags/","title":"Extra Tags","text":"Images managed by DIB will get tagged with the human-readable version of the computed hash. This is not very convenient in some cases, for instance if we want to tag an image with the explicit version of the contained software.
DIB allows additional tags to be definedusing a label in the Dockerfile:
LABEL dib.extra-tags=\"v1.0.0,v1.0,v1\"\n
The label may contain a coma-separated list of tags to be created when the image gets promoted with the --release
flag.
Install the latest release on macOS or Linux with:
go install github.com/radiofrance/dib@latest\n
Binaries are available to download from the GitHub releases page.
"},{"location":"install/#shell-autocompletion","title":"Shell autocompletion","text":"Configure your shell to load DIB completions:
BashFishPowershellZshTo load completion run:
. <(dib completion bash)\n
To configure your bash shell to load completions for each session add to your bashrc:
# ~/.bashrc or ~/.bash_profile\ncommand -v dib >/dev/null && . <(dib completion bash)\n
If you have an alias for dib, you can extend shell completion to work with that alias:
# ~/.bashrc or ~/.bash_profile\nalias tm=dib\ncomplete -F __start_dib tm\n
To configure your fish shell to load completions for each session write this script to your completions dir:
dib completion fish > ~/.config/fish/completions/dib.fish\n
To load completion run:
. <(dib completion powershell)\n
To configure your powershell shell to load completions for each session add to your powershell profile:
Windows:
cd \"$env:USERPROFILE\\Documents\\WindowsPowerShell\\Modules\"\ndib completion >> dib-completion.ps1\n
Linux: cd \"${XDG_CONFIG_HOME:-\"$HOME/.config/\"}/powershell/modules\"\ndib completion >> dib-completions.ps1\n
To load completion run:
. <(dib completion zsh) && compdef _dib dib\n
To configure your zsh shell to load completions for each session add to your zshrc:
# ~/.zshrc or ~/.profile\ncommand -v dib >/dev/null && . <(dib completion zsh) && compdef _dib dib\n
or write a cached file in one of the completion directories in your ${fpath}:
echo \"${fpath// /\\n}\" | grep -i completion\ndib completion zsh > _dib\n\nmv _dib ~/.oh-my-zsh/completions # oh-my-zsh\nmv _dib ~/.zprezto/modules/completion/external/src/ # zprezto\n
"},{"location":"quickstart/","title":"Quickstart Guide","text":"This guide will show you the basics of DIB. You will build a set of images locally using the local docker daemon.
"},{"location":"quickstart/#prerequisites","title":"Prerequisites","text":"Before using DIB, ensure you have the following dependencies installed:
Then, you need to install the DIB command-line by following the installation guide.
Make sure you have authenticated access to an OCI registry, in this guide we'll assume it is registry.example.com
.
Let's create a root directory containing 2 Dockerfiles in their own subdirectories. The structure will look like:
docker/\n\u251c\u2500\u2500 base\n| \u2514\u2500\u2500 Dockerfile\n\u2514\u2500\u2500 child\n \u2514\u2500\u2500 Dockerfile\n
Now create the dockerfile for the base
image:
# docker/base/Dockerfile\nFROM alpine:latest\n\nLABEL name=\"base\"\n
The \"name\" label is mandatory, it is used by DIB to name the current image, by appending the value of the label to the registry URL. In this case, the image name is registry.example.com/base
.
Then, create the dockerfile for the child
image, which extends the base
image:
# docker/child/Dockerfile\nFROM registry.example.com/base:latest\n\nLABEL name=\"child\"\n
Tip
The directory structure does not matter to DIB. It builds the graph of dependencies based on the FROM statements. You can have either flat directory structure like shown above, or embed child images context directories in the parent context.
"},{"location":"quickstart/#configuration","title":"Configuration","text":"See the configuration section
For this guide, we'll use a configuration file as it is the more convenient way for day-to-day usage.
Let's create a .dib.yaml
next to the docker build directory:
docker/\n\u251c\u2500\u2500 base/\n\u251c\u2500\u2500 child/\n\u2514\u2500\u2500 .dib.yaml\n
Edit the file to set the registry name, used to pull and push DIB-managed images.
registry_url: registry.example.com\n
You can check everything is correct by running dib list
:
$ dib list\nUsing config file: docs/examples/.dib.yaml\n NAME HASH\n base august-berlin-blossom-magnesium\n child gee-minnesota-maryland-robin\n
You should get the output containing the list of images that DIB has discovered.
"},{"location":"quickstart/#building-the-images","title":"Building the images","text":"When you have all your images definitions in the build directory and configuration set up, you can proceed to building the images:
$ dib build\n...\n
When it's done, you can run the build command again, and you'll see that DIB does nothing as long as the Dockerfiles remain unchanged.
When you are ready to promote the images to latest
, run:
$ dib build --release\n
"},{"location":"reports/","title":"Reporting","text":"DIB generates reports after each build. By default, the reports are generated in the reports
directory. You can change it by setting the --reports-dir
option to another location.
The HTML report is the one you are going to use the most. Just click on the link displayed on the DIB output to browse the report.
In the report you'll find:
Preview:
"},{"location":"reports/#junit-reports","title":"jUnit Reports","text":"Test executors generate reports in jUnit format. They can then be parsed in a CI pipeline and displayed in a user-friendly fashion.
"},{"location":"roadmap/","title":"Roadmap","text":""},{"location":"roadmap/#road-to-v1","title":"Road to v1","text":"DIB is still a work in progress, but we plan to release a stable version (v1.0.0) after we have added the following features:
And more...
"},{"location":"tests/","title":"Tests","text":"DIB can execute tests suites to make assertions on images that it just built. This is useful to prevent regressions, and ensure everything work as expected at runtime.
"},{"location":"tests/#goss","title":"Goss","text":"Goss is a YAML-based serverspec alternative tool for validating a server\u2019s configuration. DIB runs a container from the image to test, and injects the goss binary and configuration, then execute the test itself.
To get started with goss tests, follow the steps below:
Install goss locally (for local builds only)
Follow the procedure from the official docs
Ensure the goss tests are enabled in configuration:
# .dib.yaml\ninclude_tests:\n- goss\n
Create a goss.yml
file next to the Dockerfile of the image to test
debian/\n\u251c\u2500\u2500 Dockerfile\n\u2514\u2500\u2500 goss.yml\n
Add some assertions in the goss.yml
Basic Example:
command:\n'echo \"Hello World !\"':\nexit-status: 0\nstdout:\n- 'Hello World !'\n
Read the Goss documentation to learn all possible assertions.
"},{"location":"cmd/dib/","title":"dib","text":""},{"location":"cmd/dib/#dib","title":"dib","text":"An Opinionated Docker Image Builder
"},{"location":"cmd/dib/#synopsis","title":"Synopsis","text":"Docker Image Builder helps building a complex image dependency graph
Run dib --help for more information
"},{"location":"cmd/dib/#options","title":"Options","text":" --build-path string Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively \n found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images, \n as long as it has at least one Dockerfile in it. (default \"docker\")\n --config string config file (default is $HOME/.config/.dib.yaml)\n --hash-list-file-path string Path to custom hash list file that will be used to humanize hash\n -h, --help help for dib\n -l, --log-level string Log level. Can be any level supported by logrus (\"info\", \"debug\", etc...) (default \"info\")\n --placeholder-tag string Tag used as placeholder in Dockerfile \"from\" statements, and replaced internally by dib during builds \n to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so \n Dockerfiles are always valid (images can still be built even without using dib). (default \"latest\")\n --registry-url string Docker registry URL where images are stored. (default \"eu.gcr.io/my-test-repository\")\n
"},{"location":"cmd/dib/#see-also","title":"SEE ALSO","text":"Run docker images builds
"},{"location":"cmd/dib_build/#synopsis","title":"Synopsis","text":"dib build will compute the graph of images, and compare it to the last built state
For each image, if any file part of its docker context has changed, the image will be rebuilt. Otherwise, dib will create a new tag based on the previous tag
dib build [flags]
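To make this concrete: a child image references its parent with the placeholder tag, and dib rewrites it to the parent's current tag at build time (see --placeholder-tag in the inherited options). A sketch reusing the default registry URL from those options; the image name is illustrative:
# debian/child/Dockerfile (illustrative)
FROM eu.gcr.io/my-test-repository/debian:latest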
"},{"location":"cmd/dib_build/#options","title":"Options","text":" -b, --backend string Build Backend used to run image builds. Supported backends: [docker kaniko] (default \"docker\")\n --dry-run Simulate what would happen without actually doing anything dangerous.\n --force-rebuild Forces rebuilding the entire image graph, without regarding if the target version already exists.\n -h, --help help for build\n --include-tests strings List of test runners to exclude during the test phase.\n --local-only Build docker images locally, do not push on remote registry\n --no-graph Disable generation of graph during the build process.\n --no-retag Disable re-tagging images after build. Note that temporary tags with the \"dev-\" prefix may still be pushed to the registry.\n --no-tests Disable execution of tests (unit tests, scans, etc...) after the build.\n --rate-limit int Concurrent number of builds that can run simultaneously (default 1)\n --release dib.extra-tags Enable release mode to tag all images with extra tags found in the dib.extra-tags Dockerfile labels.\n --reports-dir string Path to the directory where the reports are generated. (default \"reports\")\n
"},{"location":"cmd/dib_build/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":" --build-path string Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively \n found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images, \n as long as it has at least one Dockerfile in it. (default \"docker\")\n --config string config file (default is $HOME/.config/.dib.yaml)\n --hash-list-file-path string Path to custom hash list file that will be used to humanize hash\n -l, --log-level string Log level. Can be any level supported by logrus (\"info\", \"debug\", etc...) (default \"info\")\n --placeholder-tag string Tag used as placeholder in Dockerfile \"from\" statements, and replaced internally by dib during builds \n to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so \n Dockerfiles are always valid (images can still be built even without using dib). (default \"latest\")\n --registry-url string Docker registry URL where images are stored. (default \"eu.gcr.io/my-test-repository\")\n
"},{"location":"cmd/dib_build/#see-also","title":"SEE ALSO","text":"Generate the autocompletion script for the specified shell
"},{"location":"cmd/dib_completion/#synopsis","title":"Synopsis","text":"Generate the autocompletion script for dib for the specified shell. See each sub-command's help for details on how to use the generated script.
"},{"location":"cmd/dib_completion/#options","title":"Options","text":" -h, --help help for completion\n
"},{"location":"cmd/dib_completion/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":" --build-path string Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively \n found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images, \n as long as it has at least one Dockerfile in it. (default \"docker\")\n --config string config file (default is $HOME/.config/.dib.yaml)\n --hash-list-file-path string Path to custom hash list file that will be used to humanize hash\n -l, --log-level string Log level. Can be any level supported by logrus (\"info\", \"debug\", etc...) (default \"info\")\n --placeholder-tag string Tag used as placeholder in Dockerfile \"from\" statements, and replaced internally by dib during builds \n to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so \n Dockerfiles are always valid (images can still be built even without using dib). (default \"latest\")\n --registry-url string Docker registry URL where images are stored. (default \"eu.gcr.io/my-test-repository\")\n
"},{"location":"cmd/dib_completion/#see-also","title":"SEE ALSO","text":"Generate the autocompletion script for bash
"},{"location":"cmd/dib_completion_bash/#synopsis","title":"Synopsis","text":"Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager.
To load completions in your current shell session:
source <(dib completion bash)
To load completions for every new session, execute once:
"},{"location":"cmd/dib_completion_bash/#linux","title":"Linux:","text":"dib completion bash > /etc/bash_completion.d/dib\n
"},{"location":"cmd/dib_completion_bash/#macos","title":"macOS:","text":"dib completion bash > $(brew --prefix)/etc/bash_completion.d/dib\n
You will need to start a new shell for this setup to take effect.
dib completion bash
"},{"location":"cmd/dib_completion_bash/#options","title":"Options","text":"
  -h, --help              help for bash
      --no-descriptions   disable completion descriptions
"},{"location":"cmd/dib_completion_bash/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
"},{"location":"cmd/dib_completion_bash/#see-also","title":"SEE ALSO","text":"Generate the autocompletion script for fish
"},{"location":"cmd/dib_completion_fish/#synopsis","title":"Synopsis","text":"Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
dib completion fish | source
To load completions for every new session, execute once:
dib completion fish > ~/.config/fish/completions/dib.fish
You will need to start a new shell for this setup to take effect.
dib completion fish [flags]
"},{"location":"cmd/dib_completion_fish/#options","title":"Options","text":"
  -h, --help              help for fish
      --no-descriptions   disable completion descriptions
"},{"location":"cmd/dib_completion_fish/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
"},{"location":"cmd/dib_completion_fish/#see-also","title":"SEE ALSO","text":"Generate the autocompletion script for powershell
"},{"location":"cmd/dib_completion_powershell/#synopsis","title":"Synopsis","text":"Generate the autocompletion script for powershell.
To load completions in your current shell session:
dib completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command to your powershell profile.
dib completion powershell [flags]
"},{"location":"cmd/dib_completion_powershell/#options","title":"Options","text":"
  -h, --help              help for powershell
      --no-descriptions   disable completion descriptions
"},{"location":"cmd/dib_completion_powershell/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
"},{"location":"cmd/dib_completion_powershell/#see-also","title":"SEE ALSO","text":"Generate the autocompletion script for zsh
"},{"location":"cmd/dib_completion_zsh/#synopsis","title":"Synopsis","text":"Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment, you will need to enable it. You can execute the following once:
echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions in your current shell session:
source <(dib completion zsh)
To load completions for every new session, execute once:
"},{"location":"cmd/dib_completion_zsh/#linux","title":"Linux:","text":"dib completion zsh > \"${fpath[1]}/_dib\"\n
"},{"location":"cmd/dib_completion_zsh/#macos","title":"macOS:","text":"dib completion zsh > $(brew --prefix)/share/zsh/site-functions/_dib\n
You will need to start a new shell for this setup to take effect.
dib completion zsh [flags]
"},{"location":"cmd/dib_completion_zsh/#options","title":"Options","text":"
  -h, --help              help for zsh
      --no-descriptions   disable completion descriptions
"},{"location":"cmd/dib_completion_zsh/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --build-path string            Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively
                                     found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images,
                                     as long as it has at least one Dockerfile in it. (default "docker")
      --config string                config file (default is $HOME/.config/.dib.yaml)
      --hash-list-file-path string   Path to custom hash list file that will be used to humanize hash
  -l, --log-level string             Log level. Can be any level supported by logrus ("info", "debug", etc...) (default "info")
      --placeholder-tag string       Tag used as placeholder in Dockerfile "from" statements, and replaced internally by dib during builds
                                     to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so
                                     Dockerfiles are always valid (images can still be built even without using dib). (default "latest")
      --registry-url string          Docker registry URL where images are stored. (default "eu.gcr.io/my-test-repository")
"},{"location":"cmd/dib_completion_zsh/#see-also","title":"SEE ALSO","text":"Print list of images managed by DIB
"},{"location":"cmd/dib_list/#synopsis","title":"Synopsis","text":"dib list will print a list of all Docker images managed by DIB
dib list [flags]
"},{"location":"cmd/dib_list/#options","title":"Options","text":" -h, --help help for list\n -o, --output string Output format (console|go-template-file)\n You can provide a custom format using go-template: like this: \"-o go-template-file=...\".\n
"},{"location":"cmd/dib_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":" --build-path string Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively \n found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images, \n as long as it has at least one Dockerfile in it. (default \"docker\")\n --config string config file (default is $HOME/.config/.dib.yaml)\n --hash-list-file-path string Path to custom hash list file that will be used to humanize hash\n -l, --log-level string Log level. Can be any level supported by logrus (\"info\", \"debug\", etc...) (default \"info\")\n --placeholder-tag string Tag used as placeholder in Dockerfile \"from\" statements, and replaced internally by dib during builds \n to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so \n Dockerfiles are always valid (images can still be built even without using dib). (default \"latest\")\n --registry-url string Docker registry URL where images are stored. (default \"eu.gcr.io/my-test-repository\")\n
"},{"location":"cmd/dib_list/#see-also","title":"SEE ALSO","text":"print current dib version
dib version [flags]\n
"},{"location":"cmd/dib_version/#options","title":"Options","text":" -h, --help help for version\n
"},{"location":"cmd/dib_version/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":" --build-path string Path to the directory containing all Dockerfiles to be built by dib. Every Dockerfile will be recursively \n found and added to the build graph. You can provide any subdirectory if you want to focus on a reduced set of images, \n as long as it has at least one Dockerfile in it. (default \"docker\")\n --config string config file (default is $HOME/.config/.dib.yaml)\n --hash-list-file-path string Path to custom hash list file that will be used to humanize hash\n -l, --log-level string Log level. Can be any level supported by logrus (\"info\", \"debug\", etc...) (default \"info\")\n --placeholder-tag string Tag used as placeholder in Dockerfile \"from\" statements, and replaced internally by dib during builds \n to use the latest tags from parent images. In release mode, all images will be tagged with the placeholder tag, so \n Dockerfiles are always valid (images can still be built even without using dib). (default \"latest\")\n --registry-url string Docker registry URL where images are stored. (default \"eu.gcr.io/my-test-repository\")\n
"},{"location":"cmd/dib_version/#see-also","title":"SEE ALSO","text":"DIB can execute tests suites to make assertions on images that it just built. This is useful to prevent regressions, +and ensure everything work as expected at runtime.