Codebase
One codebase tracked in revision control, many deploys
Track your code in a version control system such as Git, then store specific deployable artifacts as images in an image registry. In this way, you can have multiple versions of your application running in different k8s resources.
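For example, a single codebase might be built into a tagged image that several Deployments then reference. A minimal sketch, assuming a hypothetical registry, image tag, and application name:

```yaml
# Hypothetical Deployment referencing one immutable image built from the
# single codebase; a staging Deployment could point at a different tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.4.2   # artifact built from the tracked codebase
```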
Dependencies
Explicitly declare and isolate dependencies
Both of these requirements are handily satisfied by container technology: the image definition declares all necessary dependencies and isolates them from the rest of the system.
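As an illustration, a hypothetical image definition might pin a base image and install the application's dependencies explicitly, so nothing is assumed to exist on the host (the file names and commands below are placeholders):

```dockerfile
# Hypothetical image definition: every dependency is declared here,
# isolated from whatever happens to be installed on the node.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt   # explicit dependency declaration
COPY . .
CMD ["python", "app.py"]
```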
Config
Store config in the environment
You can specify that environment variables or configuration files be populated from the contents of k8s ConfigMaps or Secrets, which can be kept separate from the application and deployment k8s resources. For security reasons, secret information can also be managed by cloud-provider services or by tools such as HashiCorp Vault.
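A minimal sketch of this pattern, with hypothetical names and values, might keep non-sensitive settings in a ConfigMap and sensitive ones in a Secret, then inject both as environment variables:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
stringData:
  DB_PASSWORD: "change-me"
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.4.2
    envFrom:
    - configMapRef:
        name: myapp-config    # non-sensitive config
    - secretRef:
        name: myapp-secrets   # sensitive config, managed separately
```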
Backing services
Treat backing services as attached resources
Pods never communicate directly with one another; they go through a Service, because Pods are volatile and short-lived across the cluster. Pods may restart, scale out, or be destroyed, and in each of these cases the Pod's IP address and name change. The Service is a k8s resource that sits in front of the Pods and exposes selected Pod ports on the network. Services have a fixed name and a fixed dedicated IP. The matching between Services and Pods relies on labels. When a Service matches several Pods, it load-balances the traffic across them (round-robin or random, depending on the proxy mode).
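A Service that matches Pods by label and exposes a stable name and port might look like the following sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp          # stable DNS name: myapp.<namespace>.svc.cluster.local
spec:
  selector:
    app: myapp         # matches Pods carrying this label
  ports:
  - port: 80           # stable Service port
    targetPort: 8080   # port the Pod containers actually listen on
```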
Build, release, run
Strictly separate build and run stages
Releases should be identifiable and immutable. All k8s resources can be written as YAML or JSON files, and resources are created from those files through the CLI or the API server. Created resources are treated as immutable, and any modification produces a new revision. The ability to automatically run the application, or multiple copies of it, is precisely what k8s constructs like Deployment, ReplicaSet, DaemonSet, StatefulSet and Job do.
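For example, updating the image in a Deployment's Pod template amounts to a new release: it creates a new ReplicaSet revision that can later be rolled back. An illustrative fragment, reusing the hypothetical image from above:

```yaml
# Fragment of a Deployment spec: changing this image field is a new release.
# Each change creates a new ReplicaSet revision, and
# `kubectl rollout undo deployment/myapp-prod` returns to the previous one.
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.4.3   # was 1.4.2 in the previous revision
```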
Processes
Execute the app as one or more stateless processes
Stateless processes are a core idea behind cloud native applications. Any time you need to persist information, it must be stored in a backing service such as a database, object storage, or a persistent volume. Developers are used to “sticky” sessions, storing information in the session with the confidence that the next request will come back to the same server; in a cloud application, however, you must never make that assumption.
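If state must survive Pod restarts, it belongs in a backing service; for file-like state, that can mean a persistent volume claimed separately from the Pod. A rough sketch with hypothetical names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.4.2
    volumeMounts:
    - name: data
      mountPath: /var/lib/myapp   # state lives on the volume, not in the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myapp-data
```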
Port binding
Export services via port binding
Applications should not depend on additional processes in the local environment. Remember, every function should run in its own process, isolated from everything else in a separate container. In a k8s-based app, this is achieved through the architecture of the application itself and by making sure that all of the application's dependencies are included when the containers on which the application is built are created.
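In practice, this means the container itself listens on a port that it declares, rather than relying on a server process injected by the host; a hypothetical snippet:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.4.2
    ports:
    - containerPort: 8080   # the app binds this port itself; a Service can then expose it
```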
Concurrency
Scale out via the process model
When you’re writing an app, make sure you design it to be scaled out rather than scaled up. That means that to add more capacity, you should be able to add more instances rather than add more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
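In k8s, scaling out is a matter of running more Pod replicas, either by raising spec.replicas on a Deployment or by letting a HorizontalPodAutoscaler do it. An illustrative autoscaler, assuming the hypothetical Deployment named earlier:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-prod
  minReplicas: 2
  maxReplicas: 10          # add instances, not bigger machines
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```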
Disposability
Maximize robustness with fast startup and graceful shutdown
The idea that processes should be disposable means that an application can die at any time without affecting the user, either because other instances take its place or because it starts right back up again. Containers are built on this principle, and Kubernetes builds on it by managing instances, maintaining availability, and self-healing in the face of issues. K8s uses a Pod’s liveness and readiness probes to monitor the health of each container.
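Probes are declared per container; a sketch with hypothetical health endpoints:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  terminationGracePeriodSeconds: 30   # time allowed for graceful shutdown
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.4.2
    readinessProbe:                   # gate traffic until the app is ready
      httpGet:
        path: /healthz/ready
        port: 8080
      initialDelaySeconds: 5
    livenessProbe:                    # restart the container if it stops responding
      httpGet:
        path: /healthz/live
        port: 8080
      periodSeconds: 10
```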
Logs
Treat logs as event streams
While most traditional applications store logs in local files, a cloud native app writes them to stdout as a stream of events; it is the execution environment that is responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and shipping the logs to a system such as an EFK (Elasticsearch, Fluentd, Kibana) stack. In K8s, you have choices for automatic log capture: cloud-provider services such as Stackdriver, CloudWatch, or Log Analytics, or Elasticsearch if you want an on-premises solution.
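The application itself simply writes to stdout and stderr, and the cluster's logging agent (for example, a Fluentd DaemonSet) picks the stream up from the node. A trivial illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logger-demo
spec:
  containers:
  - name: app
    image: busybox
    # Write events to stdout; the node's log agent collects them, and
    # `kubectl logs logger-demo` shows the same stream.
    command: ["sh", "-c", "while true; do echo \"event $(date)\"; sleep 5; done"]
```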
Admin processes
Run admin/management tasks as one-off processes
This principle involves separating admin tasks, such as migrating a database or inspecting records, from the rest of the application. Even though they're separate, they must still be run in the same environment and against the same code and configuration as the application, and their code must be shipped alongside the application to prevent drift. You can implement this in a number of different ways in K8s-based applications, depending on the size and scale of the application itself. For small tasks, you might use kubectl exec to operate on a specific container, or you can use a K8s Job to run a self-contained task. For more complicated tasks that involve orchestrating changes, you can also use Helm charts.
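A one-off admin task such as a database migration can be expressed as a Job that runs the same image and configuration as the application. An illustrative sketch; the migration command and names are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-db-migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: registry.example.com/myapp:1.4.2      # same artifact as the app
        command: ["python", "manage.py", "migrate"]  # hypothetical migration command
        envFrom:
        - configMapRef:
            name: myapp-config                       # same config as the app
        - secretRef:
            name: myapp-secrets
```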
Dev/prod parity
Keep development, staging, and production as similar as possible
On the surface, this means you should have development, staging, and production environments that are as identical as possible. One way to accomplish this is through Kubernetes namespaces, which (theoretically) let you run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm-dind-cluster to create near-clones of production systems.

At a deeper level, it's about three different types of everyday gaps:

- The time gap: a developer may work on code that takes days, weeks, or even months to go into production.
- The personnel gap: developers write code, ops engineers deploy it.
- The tools gap: developers may be using a stack like Nginx, SQLite, and OS X, while the production deployment uses Apache, MySQL, and Linux.

The goal is to create a CI/CD pipeline in which changes can go into ‘production’ virtually immediately (after testing, of course!), deployed by the developers who wrote them so those developers can actually see their code in ‘production’, using the same tools the code was written with, in order to minimize the possibility of compatibility errors between environments.

Some of these factors, such as the personnel gap, are outside the realm of Kubernetes. The time and tools gaps, however, can be helped. The time gap is narrowed in Kubernetes-based applications by containers, which are built from image definitions kept in version control, so they lend themselves to CI/CD; they can also be updated via rolling updates that can be rolled back in case of problems, so they're well suited to this kind of environment. As far as the tools gap is concerned, the architecture of Kubernetes-based applications makes it easier to manage, both by making local dependencies simple to include in the various images and by modularizing the application in such a way that external backing services can be standardized.
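As a sketch, the same manifests could be applied into separate namespaces on one cluster, keeping environments isolated but identical in shape (names are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-production
# The same Deployment/Service manifests can then be applied into either
# namespace, e.g. `kubectl apply -n myapp-staging -f manifests/`.
```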