kube-mgmt

kube-mgmt manages the policies and data of Open Policy Agent (OPA) instances in Kubernetes.

Use kube-mgmt to:
- Load policies and/or static data into an OPA instance from ConfigMaps.
- Replicate Kubernetes resources, including Custom Resource Definitions (CRDs), into an OPA instance.
Both OPA and kube-mgmt can be installed using the opa-kube-mgmt Helm chart. Follow the chart's README to install it into your Kubernetes cluster.
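A minimal install sketch is shown below; the chart repository URL is an assumption here, so consult the chart's README for the authoritative steps:

# Assumed chart repository URL; verify against the opa-kube-mgmt README.
helm repo add opa-kube-mgmt https://open-policy-agent.github.io/kube-mgmt/charts
helm repo update
helm install opa opa-kube-mgmt/opa-kube-mgmt --namespace opa --create-namespace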
kube-mgmt automatically discovers policies and JSON data stored in ConfigMaps in Kubernetes and loads them into OPA.
kube-mgmt assumes a ConfigMap contains policy or JSON data if the ConfigMap is:

- Created in a namespace listed in the --namespaces option. If you specify --namespaces=* then kube-mgmt will look for policies in ALL namespaces.
- Labelled with openpolicyagent.org/policy=rego for policies.
- Labelled with openpolicyagent.org/data=opa for JSON data.
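For example, assuming kube-mgmt watches the opa namespace and your policy lives in a local main.rego file, the ConfigMap could be created and labelled like this (the names are illustrative):

# Create a ConfigMap from a local Rego file and mark it as a policy.
kubectl create configmap hello-policy --namespace=opa --from-file=main.rego
kubectl label configmap hello-policy --namespace=opa openpolicyagent.org/policy=rego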
Policy or data discovery and loading can be disabled using the --enable-policy=false or --enable-data=false flags, respectively.
Label names and their values can be configured using the --policy-label, --policy-value, --data-label, and --data-value CLI options.
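For example, a sketch of switching to a company-specific labelling scheme (mycompany.org is a placeholder):

# Placeholders: use whatever label names and values your ConfigMaps actually carry.
--policy-label=mycompany.org/policy
--policy-value=rego
--data-label=mycompany.org/data
--data-value=opa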
When a ConfigMap has been successfully loaded into OPA, the openpolicyagent.org/kube-mgmt-status annotation is set to {"status": "ok"}.

If loading fails for some reason (e.g., because of a parse error), the openpolicyagent.org/kube-mgmt-status annotation is set to {"status": "error", "error": ...}, where the error field contains details about the failure.
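You can inspect the annotation with kubectl; for example, assuming the hello-policy ConfigMap from above:

# Dots in the annotation key must be escaped in the jsonpath expression.
kubectl get configmap hello-policy --namespace=opa \
  --output=jsonpath='{.metadata.annotations.openpolicyagent\.org/kube-mgmt-status}'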
Data loaded out of ConfigMaps is laid out as follows:
<namespace>/<name>/<key>
For example, if the following ConfigMap was created:
kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-data
  namespace: opa
  labels:
    openpolicyagent.org/data: opa
data:
  x.json: |
    {"a": [1,2,3,4]}
Note: "x.json" may be any key.
You could refer to the data inside your policies as follows:
data.opa["hello-data"]["x.json"].a[0] # evaluates to 1
Warning: K8s resource replication requires cluster-wide permissions, granted via a ClusterRole and ClusterRoleBinding.
kube-mgmt can be configured to replicate Kubernetes resources into OPA so that you can express policies over an eventually consistent cache of Kubernetes state.

Replication is enabled with the following options:
# Replicate namespace-level resources. May be specified multiple times.
--replicate=<[group/]version/resource>
# Replicate cluster-level resources. May be specified multiple times.
--replicate-cluster=<[group/]version/resource>
By default, resources are replicated from all namespaces. Use the --replicate-ignore-namespaces option to exclude particular namespaces from replication.
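For example, to skip system namespaces (assuming the option accepts a comma-separated list):

# Value format is an assumption; check kube-mgmt --help for the exact syntax.
--replicate-ignore-namespaces=kube-system,kube-public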
Kubernetes resources replicated into OPA are laid out as follows:
<replicate-path>/<resource>/<namespace>/<name> # namespace scoped
<replicate-path>/<resource>/<name> # cluster scoped
- <replicate-path> is configurable (via --replicate-path) and defaults to kubernetes.
- <resource> is the Kubernetes resource plural, e.g., nodes, pods, services, etc.
- <namespace> is the namespace of the Kubernetes resource.
- <name> is the name of the Kubernetes resource.
For example, to search for services with the label "foo" you could write:
some namespace, name
service := data.kubernetes.services[namespace][name]
service.metadata.labels["foo"]
An alternative way to visualize the layout is as a single JSON document:

{
  "kubernetes": {
    "services": {
      "default": {
        "example-service": {...},
        "another-service": {...}
      }
    }
  }
}
The example below would replicate Deployments, Services, and Nodes into OPA:
--replicate=apps/v1/deployments
--replicate=v1/services
--replicate-cluster=v1/nodes
Custom Resource Definitions (CRDs) can also be replicated using the same --replicate and --replicate-cluster options.
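For example, a namespaced custom resource could be replicated like this (the group, version, and plural name are hypothetical):

# Hypothetical CRD with group example.com, version v1, and plural "gadgets".
--replicate=example.com/v1/gadgets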
To get started with admission control policy enforcement in Kubernetes 1.9 or later see the Kubernetes Admission Control tutorial. For older versions of Kubernetes, see Admission Control (1.7).
In the Kubernetes Admission Control tutorial, OPA is NOT running with an authorization policy configured and hence clients can read and write policies in OPA. When deploying OPA in an insecure environment, it is recommended to configure authentication and authorization on the OPA daemon. For an example of how OPA can be securely deployed as an admission controller, see Admission Control Secure.
kube-mgmt is a privileged component that can load policy and data into OPA. Other clients connecting to the OPA API only need to query for policy decisions.

To load policy and data into OPA, kube-mgmt uses the following OPA API endpoints:
- PUT /v1/policies/<path> - upserting policies
- DELETE /v1/policies/<path> - deleting policies
- PUT /v1/data/<path> - upserting data
- PATCH /v1/data/<path> - updating and removing data
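As a rough sketch, equivalent calls made with curl against a local OPA could look like this (the paths are illustrative, and the Authorization header is only needed when OPA runs with token authentication, as discussed below):

# Upsert a policy under the id "opa/hello".
curl -X PUT --data-binary @main.rego \
  -H "Authorization: Bearer $TOKEN" \
  http://localhost:8181/v1/policies/opa/hello

# Upsert JSON data under data.opa["hello-data"]["x.json"].
curl -X PUT -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  --data '{"a": [1,2,3,4]}' \
  http://localhost:8181/v1/data/opa/hello-data/x.json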
Many users configure OPA with a simple API authorization policy that restricts access to the OPA APIs:
package system.authz

# Deny access by default.
default allow = false

# Allow anonymous access to decision `data.example.response`
#
# NOTE: the specific decision differs depending on your policies.
# NOTE: depending on how callers are configured, they may only require this or the default decision below.
allow {
    input.path == ["v0", "data", "example", "response"]
    input.method == "POST"
}

# Allow anonymous access to default decision.
allow {
    input.path == [""]
    input.method == "POST"
}

# This is only used for health check in liveness and readiness probe
allow {
    input.path == ["health"]
    input.method == "GET"
}

# This is only used for prometheus metrics
allow {
    input.path == ["metrics"]
    input.method == "GET"
}

# This is used by kube-mgmt to PUT/PATCH against /v1/data and PUT/DELETE against /v1/policies.
#
# NOTE: The $TOKEN value is replaced at deploy-time with the actual value that kube-mgmt will use. This is typically done by an initContainer.
allow {
    input.identity == "$TOKEN"
}
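A minimal sketch of starting OPA with token authentication and this authorization policy (the file name is illustrative; the Helm chart normally wires this up for you):

opa run --server \
  --authentication=token \
  --authorization=basic \
  authz.rego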
Development requires the following tools:

- Go language toolchain.
- just - generic command runner.
- skaffold - build and publish docker images and more; v2.x and above is required.
- helm - package manager for k8s.
- k3d - local k8s cluster with docker registry.
This project uses just for building, testing, and running kube-mgmt locally. It is configured from the justfile in the root directory. All available recipes can be inspected by running just without arguments.
To release a new version, create a GitHub release with a tag name that follows the semantic versioning convention. As soon as the tag is pushed, the CI pipeline builds and publishes the artifacts: Docker images for the supported architectures and the Helm chart.