MCO-1100: enable RHEL entitlements in on-cluster layering #4312

62 changes: 56 additions & 6 deletions pkg/controller/build/assets/buildah-build.sh
@@ -5,6 +5,10 @@
# custom build pod.
set -xeuo

ETC_PKI_ENTITLEMENT_MOUNTPOINT="${ETC_PKI_ENTITLEMENT_MOUNTPOINT:-}"
ETC_PKI_RPM_GPG_MOUNTPOINT="${ETC_PKI_RPM_GPG_MOUNTPOINT:-}"
ETC_YUM_REPOS_D_MOUNTPOINT="${ETC_YUM_REPOS_D_MOUNTPOINT:-}"

build_context="$HOME/context"

# Create a directory to hold our build context.
@@ -14,12 +18,58 @@ mkdir -p "$build_context/machineconfig"
cp /tmp/dockerfile/Dockerfile "$build_context"
cp /tmp/machineconfig/machineconfig.json.gz "$build_context/machineconfig/"

# Build our image using Buildah.
buildah bud \
--storage-driver vfs \
--authfile="$BASE_IMAGE_PULL_CREDS" \
--tag "$TAG" \
--file="$build_context/Dockerfile" "$build_context"
build_args=(
--log-level=DEBUG
--storage-driver vfs
--authfile="$BASE_IMAGE_PULL_CREDS"
--tag "$TAG"
--file="$build_context/Dockerfile"
)

mount_opts="z,rw"

# If we have RHSM certs, copy them into a tempdir to avoid SELinux issues, and
# tell Buildah about them.
rhsm_path="/var/run/secrets/rhsm"
if [[ -d "$rhsm_path" ]]; then
rhsm_certs="$(mktemp -d)"
cp -r -v "$rhsm_path/." "$rhsm_certs"
chmod -R 0755 "$rhsm_certs"
build_args+=("--volume=$rhsm_certs:/run/secrets/rhsm:$mount_opts")
fi

# If we have /etc/pki/entitlement certificates, commonly used with RHEL
# entitlements, copy them into a tempdir to avoid SELinux issues, and tell
# Buildah about them.
if [[ -n "$ETC_PKI_ENTITLEMENT_MOUNTPOINT" ]] && [[ -d "$ETC_PKI_ENTITLEMENT_MOUNTPOINT" ]]; then
configs="$(mktemp -d)"
cp -r -v "$ETC_PKI_ENTITLEMENT_MOUNTPOINT/." "$configs"
chmod -R 0755 "$configs"
build_args+=("--volume=$configs:$ETC_PKI_ENTITLEMENT_MOUNTPOINT:$mount_opts")
fi

# If we have /etc/yum.repos.d configs, commonly used with Red Hat Satellite
# subscriptions, copy them into a tempdir to avoid SELinux issues, and tell
# Buildah about them.
if [[ -n "$ETC_YUM_REPOS_D_MOUNTPOINT" ]] && [[ -d "$ETC_YUM_REPOS_D_MOUNTPOINT" ]]; then
configs="$(mktemp -d)"
cp -r -v "$ETC_YUM_REPOS_D_MOUNTPOINT/." "$configs"
chmod -R 0755 "$configs"
build_args+=("--volume=$configs:$ETC_YUM_REPOS_D_MOUNTPOINT:$mount_opts")
fi

# If we have /etc/pki/rpm-gpg configs, commonly used with Red Hat Satellite
# subscriptions, copy them into a tempdir to avoid SELinux issues, and tell
# Buildah about them.
if [[ -n "$ETC_PKI_RPM_GPG_MOUNTPOINT" ]] && [[ -d "$ETC_PKI_RPM_GPG_MOUNTPOINT" ]]; then
configs="$(mktemp -d)"
cp -r -v "$ETC_PKI_RPM_GPG_MOUNTPOINT/." "$configs"
chmod -R 0755 "$configs"
build_args+=("--volume=$configs:$ETC_PKI_RPM_GPG_MOUNTPOINT:$mount_opts")
fi
Comment on lines +41 to +69
Contributor

For modularity, you could make a function that encapsulates this, but only if you wanted to. It isn't really mandatory, as your code looks good! Just a suggestion:

function prepare_and_mount_dir {
	local description="$1"
	local mount_point="$2"

	if [[ -n "$mount_point" ]] && [[ -d "$mount_point" ]]; then
		echo "Preparing $description"
		configs="$(mktemp -d)"
		cp -r -v "$mount_point/." "$configs"
		chmod -R 0755 "$configs"
		build_args+=("--volume=$configs:$mount_point:$mount_opts")
	fi
}

prepare_and_mount_dir "RHSM Certs" "$rhsm_path"
prepare_and_mount_dir "RPM-GPG Configs" "$ETC_PKI_RPM_GPG_MOUNTPOINT"

Member Author

I tried to do that, but I couldn't get it to work quite the way that I wanted it to. Admittedly, my Bash is a little rusty. But I did think of two interesting paths forward for the future:

  1. There is a Python3 interpreter available in the official Buildah image, the MCO image, and the RHCOS image. So if I were so inclined, I could re-write this in Python. That would open the door to writing unit tests around the script. Although, one doesn't strictly need Python to do unit tests since Bats exists.
  2. Use Go instead of Bash to orchestrate things. Instead of this Bash script, we would add another binary to the MCO container which would get called instead. As a starting point, this binary could do what this Bash script does, but it could eventually do so much more.
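
To make option 2 concrete, here is a rough sketch of what such a Go entrypoint could look like, doing only what the Bash script does today. Everything in it is hypothetical (the names, the env-var wiring), and it assumes buildah is available on PATH inside the build pod:

package main

import (
	"os"
	"os/exec"
)

// prepareMount copies the contents of src into a temp dir (sidestepping the
// SELinux labeling issue the script works around) and returns a --volume
// flag that mounts the copy back at src. Missing directories are skipped.
// A real binary would also chmod the tree and copy it natively rather than
// shelling out to cp.
func prepareMount(src string) (string, error) {
	if _, err := os.Stat(src); err != nil {
		return "", nil // optional mount point not present; skip
	}
	dst, err := os.MkdirTemp("", "build-mount-")
	if err != nil {
		return "", err
	}
	if err := exec.Command("cp", "-r", src+"/.", dst).Run(); err != nil {
		return "", err
	}
	return "--volume=" + dst + ":" + src + ":z,rw", nil
}

func main() {
	args := []string{
		"bud",
		"--storage-driver", "vfs",
		"--authfile", os.Getenv("BASE_IMAGE_PULL_CREDS"),
		"--tag", os.Getenv("TAG"),
	}
	for _, dir := range []string{"/etc/pki/entitlement", "/etc/yum.repos.d", "/etc/pki/rpm-gpg"} {
		vol, err := prepareMount(dir)
		if err != nil {
			panic(err)
		}
		if vol != "" {
			args = append(args, vol)
		}
	}
	cmd := exec.Command("buildah", append(args, os.Getenv("HOME")+"/context")...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}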


# Build our image.
buildah bud "${build_args[@]}" "$build_context"

# Push our built image.
buildah push \
103 changes: 98 additions & 5 deletions pkg/controller/build/build_controller.go
@@ -52,6 +52,17 @@ import (
"github.com/openshift/machine-config-operator/internal/clients"
)

const (
// Name of the etc-pki-entitlement secret from the openshift-config-managed namespace.
etcPkiEntitlementSecretName = "etc-pki-entitlement"

// Name of the etc-pki-rpm-gpg secret.
etcPkiRpmGpgSecretName = "etc-pki-rpm-gpg"

// Name of the etc-yum-repos-d ConfigMap.
etcYumReposDConfigMapName = "etc-yum-repos-d"
)

const (
targetMachineConfigPoolLabel = "machineconfiguration.openshift.io/targetMachineConfigPool"
// TODO(zzlotnik): Is there a constant for this someplace else?
@@ -472,6 +483,20 @@ func (ctrl *Controller) customBuildPodUpdater(pod *corev1.Pod) error {

ps := newPoolState(pool)

// We cannot solely rely upon the pod phase to determine whether the build
// pod is in an error state. This is because it is possible for the build
// container to enter an error state while the wait-for-done container is
// still running. The pod phase in this state will still be "Running" as
// opposed to error.
if isBuildPodError(pod) {
if err := ctrl.markBuildFailed(ps); err != nil {
return err
}

ctrl.enqueueMachineConfigPool(pool)
return nil
}

switch pod.Status.Phase {
case corev1.PodPending:
if !ps.IsBuildPending() {
@@ -503,6 +528,22 @@ func (ctrl *Controller) customBuildPodUpdater(pod *corev1.Pod) error {
return nil
}

// Determines if the build pod is in an error state by examining the individual
// container statuses. Returns true if any container is in an error state.
func isBuildPodError(pod *corev1.Pod) bool {
for _, container := range pod.Status.ContainerStatuses {
if container.State.Waiting != nil && container.State.Waiting.Reason == "ErrImagePull" {
return true
}

if container.State.Terminated != nil && container.State.Terminated.ExitCode != 0 {
return true
}
}

return false
}
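
For illustration, the mixed state described above looks roughly like the following (a hypothetical status, not taken from the PR): the wait-for-done container keeps the pod phase at Running even though the image-build container has already terminated non-zero.

pod := &corev1.Pod{
	Status: corev1.PodStatus{
		// The overall phase is still "Running" because wait-for-done runs on.
		Phase: corev1.PodRunning,
		ContainerStatuses: []corev1.ContainerStatus{
			{
				Name:  "wait-for-done",
				State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}},
			},
			{
				Name:  "image-build",
				State: corev1.ContainerState{Terminated: &corev1.ContainerStateTerminated{ExitCode: 1}},
			},
		},
	},
}
// isBuildPodError(pod) returns true here, even though a phase check alone
// would have treated the pod as healthy.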

func (ctrl *Controller) handleConfigMapError(pools []*mcfgv1.MachineConfigPool, err error, key interface{}) {
klog.V(2).Infof("Error syncing configmap %v: %v", key, err)
utilruntime.HandleError(err)
@@ -950,17 +991,69 @@ func (ctrl *Controller) getBuildInputs(ps *poolState) (*buildInputs, error) {
return nil, fmt.Errorf("could not get MachineConfig %s: %w", currentMC, err)
}

etcPkiEntitlements, err := ctrl.getOptionalSecret(etcPkiEntitlementSecretName)
if err != nil {
return nil, err
}

etcPkiRpmGpgKeys, err := ctrl.getOptionalSecret(etcPkiRpmGpgSecretName)
if err != nil {
return nil, err
}

etcYumReposDConfigs, err := ctrl.getOptionalConfigMap(etcYumReposDConfigMapName)
if err != nil {
return nil, err
}

inputs := &buildInputs{
onClusterBuildConfig: onClusterBuildConfig,
osImageURL: osImageURL,
customDockerfiles: customDockerfiles,
pool: ps.MachineConfigPool(),
machineConfig: mc,
onClusterBuildConfig: onClusterBuildConfig,
osImageURL: osImageURL,
customDockerfiles: customDockerfiles,
pool: ps.MachineConfigPool(),
machineConfig: mc,
etcPkiEntitlementKeys: etcPkiEntitlements,
etcYumReposDConfigs: etcYumReposDConfigs,
etcPkiRpmGpgKeys: etcPkiRpmGpgKeys,
}

return inputs, nil
}

// Fetches an optional secret to inject into the build. Returns a nil error if
// the secret is not found.
func (ctrl *Controller) getOptionalSecret(secretName string) (*corev1.Secret, error) {
optionalSecret, err := ctrl.kubeclient.CoreV1().Secrets(ctrlcommon.MCONamespace).Get(context.TODO(), secretName, metav1.GetOptions{})
if err == nil {
klog.Infof("Optional build secret %q found, will include in build", secretName)
return optionalSecret, nil
}

if k8serrors.IsNotFound(err) {
klog.Infof("Could not find optional secret %q, will not include in build", secretName)
return nil, nil
}

return nil, fmt.Errorf("could not retrieve optional secret: %s: %w", secretName, err)
}

// Fetches an optional ConfigMap to inject into the build. Returns a nil error if
// the ConfigMap is not found.
func (ctrl *Controller) getOptionalConfigMap(configmapName string) (*corev1.ConfigMap, error) {
optionalConfigMap, err := ctrl.kubeclient.CoreV1().ConfigMaps(ctrlcommon.MCONamespace).Get(context.TODO(), configmapName, metav1.GetOptions{})
if err == nil {
klog.Infof("Optional build ConfigMap %q found, will include in build", configmapName)
return optionalConfigMap, nil
}

if k8serrors.IsNotFound(err) {
klog.Infof("Could not find ConfigMap %q, will not include in build", configmapName)
return nil, nil
}

return nil, fmt.Errorf("could not retrieve optional ConfigMap: %s: %w", configmapName, err)
}
Comment on lines 1022 to +1055
Contributor

Another non-mandatory suggestion: these can be combined into a single "getOptionalResource" that takes a resource type. Do something like the following, then combine the rest:

if resourceType == "Secret" {
	resource, err = ctrl.kubeclient.CoreV1().Secrets(ctrlcommon.MCONamespace).Get(context.TODO(), resourceName, metav1.GetOptions{})
} else if resourceType == "ConfigMap" {
	resource, err = ctrl.kubeclient.CoreV1().ConfigMaps(ctrlcommon.MCONamespace).Get(context.TODO(), resourceName, metav1.GetOptions{})
}

Member Author


This is a good idea for the future, especially since both of these helpers only really concern themselves about the existence of the resource. That would blend nicely with some future refactoring ideas that I have.
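
One possible shape for that future refactor, sketched with a Go 1.18+ type parameter instead of a resourceType string; the helper name and layout here are assumptions, and callers pass the appropriate client-go Get method directly:

// Fetches an optional resource to inject into the build, where getter is any
// client-go Get method with the standard signature. Returns the zero value
// and a nil error if the resource is not found.
func getOptionalResource[T any](name string, getter func(context.Context, string, metav1.GetOptions) (T, error)) (T, error) {
	resource, err := getter(context.TODO(), name, metav1.GetOptions{})
	if err == nil {
		klog.Infof("Optional build resource %q found, will include in build", name)
		return resource, nil
	}
	var zero T
	if k8serrors.IsNotFound(err) {
		klog.Infof("Could not find optional resource %q, will not include in build", name)
		return zero, nil
	}
	return zero, fmt.Errorf("could not retrieve optional resource: %s: %w", name, err)
}

// Usage:
//   secret, err := getOptionalResource(etcPkiEntitlementSecretName,
//       ctrl.kubeclient.CoreV1().Secrets(ctrlcommon.MCONamespace).Get)
//   cm, err := getOptionalResource(etcYumReposDConfigMapName,
//       ctrl.kubeclient.CoreV1().ConfigMaps(ctrlcommon.MCONamespace).Get)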


// Prepares all of the objects needed to perform an image build.
func (ctrl *Controller) prepareForBuild(inputs *buildInputs) (ImageBuildRequest, error) {
ibr := newImageBuildRequestFromBuildInputs(inputs)
288 changes: 206 additions & 82 deletions pkg/controller/build/image_build_request.go
@@ -19,6 +19,7 @@ const (
mcPoolAnnotation string = "machineconfiguration.openshift.io/pool"
machineConfigJSONFilename string = "machineconfig.json.gz"
buildahImagePullspec string = "quay.io/buildah/stable:latest"
rhelEntitlementSecret string = "etc-pki-entitlement"
)

//go:embed assets/Dockerfile.on-cluster-build-template
@@ -55,14 +56,23 @@ type ImageBuildRequest struct {
ReleaseVersion string
// An optional user-supplied Dockerfile that gets injected into the build.
CustomDockerfile string
// Has /etc/pki/entitlement
HasEtcPkiEntitlementKeys bool
// Has /etc/yum.repos.d configs
HasEtcYumReposDConfigs bool
// Has /etc/pki/rpm-gpg configs
HasEtcPkiRpmGpgKeys bool
}

type buildInputs struct {
onClusterBuildConfig *corev1.ConfigMap
osImageURL *corev1.ConfigMap
customDockerfiles *corev1.ConfigMap
pool *mcfgv1.MachineConfigPool
machineConfig *mcfgv1.MachineConfig
onClusterBuildConfig *corev1.ConfigMap
osImageURL *corev1.ConfigMap
customDockerfiles *corev1.ConfigMap
pool *mcfgv1.MachineConfigPool
machineConfig *mcfgv1.MachineConfig
etcPkiEntitlementKeys *corev1.Secret
etcPkiRpmGpgKeys *corev1.Secret
etcYumReposDConfigs *corev1.ConfigMap
}

// Constructs a simple ImageBuildRequest.
@@ -112,12 +122,15 @@ func newImageBuildRequestFromBuildInputs(inputs *buildInputs) ImageBuildRequest
}

return ImageBuildRequest{
Pool: inputs.pool.DeepCopy(),
BaseImage: newBaseImageInfo(inputs),
FinalImage: newFinalImageInfo(inputs),
ExtensionsImage: newExtensionsImageInfo(inputs),
ReleaseVersion: inputs.osImageURL.Data[releaseVersionConfigKey],
CustomDockerfile: customDockerfile,
Pool: inputs.pool.DeepCopy(),
BaseImage: newBaseImageInfo(inputs),
FinalImage: newFinalImageInfo(inputs),
ExtensionsImage: newExtensionsImageInfo(inputs),
ReleaseVersion: inputs.osImageURL.Data[releaseVersionConfigKey],
CustomDockerfile: customDockerfile,
HasEtcPkiEntitlementKeys: inputs.etcPkiEntitlementKeys != nil,
HasEtcYumReposDConfigs: inputs.etcYumReposDConfigs != nil,
HasEtcPkiRpmGpgKeys: inputs.etcPkiRpmGpgKeys != nil,
}
}

@@ -465,8 +478,6 @@ func (i ImageBuildRequest) toBuildahPod() *corev1.Pod {
RunAsGroup: &gid,
}

command := []string{"/bin/bash", "-c"}

volumeMounts := []corev1.VolumeMount{
{
Name: "machineconfig",
@@ -488,6 +499,168 @@ func (i ImageBuildRequest) toBuildahPod() *corev1.Pod {
Name: "done",
MountPath: "/tmp/done",
},
{
Name: "buildah-cache",
MountPath: "/home/build/.local/share/containers",
},
}

// Octal: 0755.
var mountMode int32 = 493

volumes := []corev1.Volume{
{
// Provides the rendered Dockerfile.
Name: "dockerfile",
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: i.getDockerfileConfigMapName(),
},
},
},
},
{
// Provides the rendered MachineConfig in a gzipped / base64-encoded
// format.
Name: "machineconfig",
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: i.getMCConfigMapName(),
},
},
},
},
{
// Provides the credentials needed to pull the base OS image.
Name: "base-image-pull-creds",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: i.BaseImage.PullSecret.Name,
Items: []corev1.KeyToPath{
{
Key: corev1.DockerConfigJsonKey,
Path: "config.json",
},
},
},
},
},
{
// Provides the credentials needed to push the final OS image.
Name: "final-image-push-creds",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: i.FinalImage.PullSecret.Name,
Items: []corev1.KeyToPath{
{
Key: corev1.DockerConfigJsonKey,
Path: "config.json",
},
},
},
},
},
{
// Provides a way for the "image-build" container to signal that it
// finished so that the "wait-for-done" container can retrieve the
// image SHA.
Name: "done",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
},
},
},
{
// This provides a dedicated place for Buildah to store / cache its
// images during the build. This seems to be required for the build-time
// volume mounts to work correctly, most likely due to an issue with
// SELinux that I have yet to figure out. Despite being called a cache
// directory, it gets removed whenever the build pod exits.
Name: "buildah-cache",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
}

// If the etc-pki-entitlement secret is found, mount it into the build pod.
if i.HasEtcPkiEntitlementKeys {
mountPoint := "/etc/pki/entitlement"

env = append(env, corev1.EnvVar{
Name: "ETC_PKI_ENTITLEMENT_MOUNTPOINT",
Value: mountPoint,
})

volumeMounts = append(volumeMounts, corev1.VolumeMount{
Name: etcPkiEntitlementSecretName,
MountPath: mountPoint,
})

volumes = append(volumes, corev1.Volume{
Name: etcPkiEntitlementSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
DefaultMode: &mountMode,
SecretName: etcPkiEntitlementSecretName,
},
},
})
}

// If the etc-yum-repos-d ConfigMap is found, mount it into the build pod.
if i.HasEtcYumReposDConfigs {
mountPoint := "/etc/yum.repos.d"

env = append(env, corev1.EnvVar{
Name: "ETC_YUM_REPOS_D_MOUNTPOINT",
Value: mountPoint,
})

volumeMounts = append(volumeMounts, corev1.VolumeMount{
Name: etcYumReposDConfigMapName,
MountPath: mountPoint,
})

volumes = append(volumes, corev1.Volume{
Name: etcYumReposDConfigMapName,
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
DefaultMode: &mountMode,
LocalObjectReference: corev1.LocalObjectReference{
Name: etcYumReposDConfigMapName,
},
},
},
})
}

// If the etc-pki-rpm-gpg secret is found, mount it into the build pod.
if i.HasEtcPkiRpmGpgKeys {
mountPoint := "/etc/pki/rpm-gpg"

env = append(env, corev1.EnvVar{
Name: "ETC_PKI_RPM_GPG_MOUNTPOINT",
Value: mountPoint,
})

volumeMounts = append(volumeMounts, corev1.VolumeMount{
Name: etcPkiRpmGpgSecretName,
MountPath: mountPoint,
})

volumes = append(volumes, corev1.Volume{
Name: etcPkiRpmGpgSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
DefaultMode: &mountMode,
SecretName: etcPkiRpmGpgSecretName,
},
},
})
}

// TODO: We need pull creds with permissions to pull the base image. By
@@ -510,7 +683,7 @@ func (i ImageBuildRequest) toBuildahPod() *corev1.Pod {
// TODO: Figure out how to not hard-code this here.
Image: buildahImagePullspec,
Env: env,
Command: append(command, buildahBuildScript),
Command: []string{"/bin/bash", "-c", buildahBuildScript},
ImagePullPolicy: corev1.PullAlways,
SecurityContext: securityContext,
VolumeMounts: volumeMounts,
@@ -523,87 +696,22 @@ func (i ImageBuildRequest) toBuildahPod() *corev1.Pod {
// us to avoid parsing log files.
Name: "wait-for-done",
Env: env,
Command: append(command, waitScript),
Command: []string{"/bin/bash", "-c", waitScript},
Image: i.BaseImage.Pullspec,
ImagePullPolicy: corev1.PullAlways,
SecurityContext: securityContext,
VolumeMounts: volumeMounts,
},
},
ServiceAccountName: "machine-os-builder",
Volumes: []corev1.Volume{
{
// Provides the rendered Dockerfile.
Name: "dockerfile",
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: i.getDockerfileConfigMapName(),
},
},
},
},
{
// Provides the rendered MachineConfig in a gzipped / base64-encoded
// format.
Name: "machineconfig",
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: i.getMCConfigMapName(),
},
},
},
},
{
// Provides the credentials needed to pull the base OS image.
Name: "base-image-pull-creds",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: i.BaseImage.PullSecret.Name,
Items: []corev1.KeyToPath{
{
Key: corev1.DockerConfigJsonKey,
Path: "config.json",
},
},
},
},
},
{
// Provides the credentials needed to push the final OS image.
Name: "final-image-push-creds",
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: i.FinalImage.PullSecret.Name,
Items: []corev1.KeyToPath{
{
Key: corev1.DockerConfigJsonKey,
Path: "config.json",
},
},
},
},
},
{
// Provides a way for the "image-build" container to signal that it
// finished so that the "wait-for-done" container can retrieve the
// image SHA.
Name: "done",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{
Medium: corev1.StorageMediumMemory,
},
},
},
},
Volumes: volumes,
},
}
}

// Constructs a common metav1.ObjectMeta object with the namespace, labels, and annotations set.
func (i ImageBuildRequest) getObjectMeta(name string) metav1.ObjectMeta {
return metav1.ObjectMeta{
objectMeta := metav1.ObjectMeta{
Name: name,
Namespace: ctrlcommon.MCONamespace,
Labels: map[string]string{
@@ -615,6 +723,22 @@ func (i ImageBuildRequest) getObjectMeta(name string) metav1.ObjectMeta {
mcPoolAnnotation: "",
},
}

hasOptionalBuildInputTemplate := "machineconfiguration.openshift.io/has-%s"

if i.HasEtcPkiEntitlementKeys {
objectMeta.Annotations[fmt.Sprintf(hasOptionalBuildInputTemplate, etcPkiEntitlementSecretName)] = ""
}

if i.HasEtcYumReposDConfigs {
objectMeta.Annotations[fmt.Sprintf(hasOptionalBuildInputTemplate, etcYumReposDConfigMapName)] = ""
}

if i.HasEtcPkiRpmGpgKeys {
objectMeta.Annotations[fmt.Sprintf(hasOptionalBuildInputTemplate, etcPkiRpmGpgSecretName)] = ""
}
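
// For reference (derived from the template above), with all three optional
// inputs present the object would carry these annotations:
//   machineconfiguration.openshift.io/has-etc-pki-entitlement: ""
//   machineconfiguration.openshift.io/has-etc-yum-repos-d: ""
//   machineconfiguration.openshift.io/has-etc-pki-rpm-gpg: ""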

return objectMeta
}

// Computes the Dockerfile ConfigMap name based upon the MachineConfigPool name.
9 changes: 9 additions & 0 deletions test/e2e-techpreview/Containerfile.cowsay
@@ -0,0 +1,9 @@
FROM quay.io/centos/centos:stream9 AS centos
RUN dnf install -y epel-release

FROM configs AS final
COPY --from=centos /etc/yum.repos.d /etc/yum.repos.d
COPY --from=centos /etc/pki/rpm-gpg/RPM-GPG-KEY-* /etc/pki/rpm-gpg/
RUN sed -i 's/\$stream/9-stream/g' /etc/yum.repos.d/centos*.repo && \
rpm-ostree install cowsay && \
ostree container commit
6 changes: 6 additions & 0 deletions test/e2e-techpreview/Containerfile.entitled
@@ -0,0 +1,6 @@
FROM configs AS final

RUN rm -rf /etc/rhsm-host && \
rpm-ostree install buildah && \
ln -s /run/secrets/rhsm /etc/rhsm-host && \
ostree container commit
3 changes: 3 additions & 0 deletions test/e2e-techpreview/Containerfile.yum-repos-d
@@ -0,0 +1,3 @@
FROM configs AS final
RUN rpm-ostree install buildah && \
ostree container commit
291 changes: 283 additions & 8 deletions test/e2e-techpreview/helpers_test.go
@@ -1,8 +1,14 @@
package e2e_techpreview_test

import (
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
"time"
@@ -13,10 +19,13 @@ import (
"github.com/openshift/machine-config-operator/test/framework"
"github.com/openshift/machine-config-operator/test/helpers"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
corev1 "k8s.io/api/core/v1"
apierrs "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/klog/v2"
"sigs.k8s.io/yaml"
)

// Identifies a secret in the MCO namespace that has permissions to push to the ImageStream used for the test.
@@ -106,24 +115,96 @@ func createSecret(t *testing.T, cs *framework.ClientSet, secret *corev1.Secret)
// Copies the global pull secret from openshift-config/pull-secret into the MCO
// namespace so that it can be used by the build processes.
func copyGlobalPullSecret(t *testing.T, cs *framework.ClientSet) func() {
globalPullSecret, err := cs.CoreV1Interface.Secrets("openshift-config").Get(context.TODO(), "pull-secret", metav1.GetOptions{})
return cloneSecret(t, cs, "pull-secret", "openshift-config", globalPullSecretCloneName, ctrlcommon.MCONamespace)
}

// Copy the entitlement certificates into the MCO namespace. If the secrets
// cannot be found, calls t.Skip() to skip the test.
//
// Registers and returns a cleanup function to remove the certificate(s) after test completion.
func copyEntitlementCerts(t *testing.T, cs *framework.ClientSet) func() {
namespace := "openshift-config-managed"
name := "etc-pki-entitlement"

_, err := cs.CoreV1Interface.Secrets(namespace).Get(context.TODO(), name, metav1.GetOptions{})
if err == nil {
return cloneSecret(t, cs, name, namespace, name, ctrlcommon.MCONamespace)
}

if apierrs.IsNotFound(err) {
t.Logf("Secret %q not found in %q, skipping test", name, namespace)
t.Skip()
return func() {}
}

t.Fatalf("could not get %q from %q: %s", name, namespace, err)
return func() {}
}

// Uses the centos stream 9 container and extracts the contents of both the
// /etc/yum.repos.d and /etc/pki/rpm-gpg directories and injects those into a
// ConfigMap and Secret, respectively. This is so that the build process will
// consume those objects as part of the build process, injecting them into the
// build context.
func injectYumRepos(t *testing.T, cs *framework.ClientSet) func() {
tempDir := t.TempDir()

yumReposPath := filepath.Join(tempDir, "yum-repos-d")
require.NoError(t, os.MkdirAll(yumReposPath, 0o755))

centosPullspec := "quay.io/centos/centos:stream9"
yumReposContents := convertFilesFromContainerImageToBytesMap(t, centosPullspec, "/etc/yum.repos.d/")
rpmGpgContents := convertFilesFromContainerImageToBytesMap(t, centosPullspec, "/etc/pki/rpm-gpg/")

configMapCleanupFunc := createConfigMap(t, cs, &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: "etc-yum-repos-d",
Namespace: ctrlcommon.MCONamespace,
},
// Note: Even though the BuildController retrieves this ConfigMap, it only
// does so to determine whether or not it is present. It does not look at
// its contents. For that reason, we can use the BinaryData field here
// because the Build Pod will use its contents the same regardless of
// whether its string data or binary data.
BinaryData: yumReposContents,
})

secretCleanupFunc := createSecret(t, cs, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "etc-pki-rpm-gpg",
Namespace: ctrlcommon.MCONamespace,
},
Data: rpmGpgContents,
})

return makeIdempotentAndRegister(t, func() {
configMapCleanupFunc()
secretCleanupFunc()
})
}

// Clones a given secret from a given namespace into the MCO namespace.
// Registers and returns a cleanup function to delete the secret upon test
// completion.
func cloneSecret(t *testing.T, cs *framework.ClientSet, srcName, srcNamespace, dstName, dstNamespace string) func() {
secret, err := cs.CoreV1Interface.Secrets(srcNamespace).Get(context.TODO(), srcName, metav1.GetOptions{})
require.NoError(t, err)

secretCopy := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: globalPullSecretCloneName,
Namespace: ctrlcommon.MCONamespace,
Name: dstName,
Namespace: dstNamespace,
},
Data: globalPullSecret.Data,
Type: globalPullSecret.Type,
Data: secret.Data,
Type: secret.Type,
}

cleanup := createSecret(t, cs, secretCopy)
t.Logf("Cloned global pull secret %q into namespace %q as %q", "pull-secret", ctrlcommon.MCONamespace, secretCopy.Name)
t.Logf("Cloned \"%s/%s\" to \"%s/%s\"", srcNamespace, srcName, dstNamespace, dstName)

return makeIdempotentAndRegister(t, func() {
cleanup()
t.Logf("Deleted global pull secret copy %q", secretCopy.Name)
t.Logf("Deleted cloned secret \"%s/%s\"", dstNamespace, dstName)
})
}

@@ -138,7 +219,9 @@ func waitForPoolToReachState(t *testing.T, cs *framework.ClientSet, poolName str
return condFunc(mcp), nil
})

require.NoError(t, err, "MachineConfigPool %q did not reach desired state", poolName)
if err != nil {
t.Fatalf("MachineConfigPool %q did not reach desired state", poolName)
}
}

// Registers a cleanup function, making it idempotent, and wiring up the
@@ -152,3 +235,195 @@ func makeIdempotentAndRegister(t *testing.T, cleanupFunc func()) func() {
t.Cleanup(out)
return out
}

// Determines where to write the build logs in the event of a failure.
// ARTIFACT_DIR is a well-known env var provided by the OpenShift CI system.
// Writing to the path in this env var will ensure that any files written to
// that path end up in the OpenShift CI GCP bucket for later viewing.
//
// If this env var is not set, these files will be written to the current
// working directory.
func getBuildArtifactDir(t *testing.T) string {
artifactDir := os.Getenv("ARTIFACT_DIR")
if artifactDir != "" {
return artifactDir
}

cwd, err := os.Getwd()
require.NoError(t, err)
return cwd
}

// Writes any ephemeral ConfigMaps that got created as part of the build
// process to a file. Also writes the build pod spec.
func writeBuildArtifactsToFiles(t *testing.T, cs *framework.ClientSet, pool *mcfgv1.MachineConfigPool) error {
dirPath := getBuildArtifactDir(t)

configmaps := []string{
"on-cluster-build-config",
"on-cluster-build-custom-dockerfile",
fmt.Sprintf("dockerfile-%s", pool.Spec.Configuration.Name),
fmt.Sprintf("mc-%s", pool.Spec.Configuration.Name),
}

for _, configmap := range configmaps {
if err := writeConfigMapToFile(t, cs, configmap, dirPath); err != nil {
return err
}
}

return writePodSpecToFile(t, cs, pool, dirPath)
}

// Writes a given ConfigMap to a file.
func writeConfigMapToFile(t *testing.T, cs *framework.ClientSet, configmapName, dirPath string) error {
cm, err := cs.CoreV1Interface.ConfigMaps(ctrlcommon.MCONamespace).Get(context.TODO(), configmapName, metav1.GetOptions{})
if err != nil {
return fmt.Errorf("could not get configmap %s: %w", configmapName, err)
}

out, err := yaml.Marshal(cm)
if err != nil {
return fmt.Errorf("could not marshal configmap %s to YAML: %w", configmapName, err)
}

filename := filepath.Join(dirPath, fmt.Sprintf("%s-%s-configmap.yaml", t.Name(), configmapName))
t.Logf("Writing configmap (%s) contents to %s", configmapName, filename)
return os.WriteFile(filename, out, 0o755)
}

// Writes a build pod spec to a file.
func writePodSpecToFile(t *testing.T, cs *framework.ClientSet, pool *mcfgv1.MachineConfigPool, dirPath string) error {
podName := fmt.Sprintf("build-%s", pool.Spec.Configuration.Name)

pod, err := cs.CoreV1Interface.Pods(ctrlcommon.MCONamespace).Get(context.TODO(), podName, metav1.GetOptions{})
if err != nil {
return fmt.Errorf("could not get pod %s: %w", podName, err)
}

out, err := yaml.Marshal(pod)
if err != nil {
return err
}

podFilename := filepath.Join(dirPath, fmt.Sprintf("%s-%s-pod.yaml", t.Name(), pod.Name))
t.Logf("Writing spec for pod %s to %s", pod.Name, podFilename)
return os.WriteFile(podFilename, out, 0o755)
}

// Streams the logs for all of the containers running in the build pod. The pod
// logs can provide a valuable window into how / why a given build failed.
func streamBuildPodLogsToFile(ctx context.Context, t *testing.T, cs *framework.ClientSet, pool *mcfgv1.MachineConfigPool) error {
podName := fmt.Sprintf("build-%s", pool.Spec.Configuration.Name)

pod, err := cs.CoreV1Interface.Pods(ctrlcommon.MCONamespace).Get(ctx, podName, metav1.GetOptions{})
if err != nil {
return fmt.Errorf("could not get pod %s: %w", podName, err)
}

errGroup, egCtx := errgroup.WithContext(ctx)

for _, container := range pod.Spec.Containers {
container := container
pod := pod.DeepCopy()

// Because we follow the logs for each container in a build pod, this
// blocks the current Goroutine. So we run each log stream operation in a
// separate Goroutine to avoid blocking the main Goroutine.
errGroup.Go(func() error {
return streamContainerLogToFile(egCtx, t, cs, pod, container)
})
}

return errGroup.Wait()
}

// Streams the logs for a given container to a file.
func streamContainerLogToFile(ctx context.Context, t *testing.T, cs *framework.ClientSet, pod *corev1.Pod, container corev1.Container) error {
dirPath := getBuildArtifactDir(t)

logger, err := cs.CoreV1Interface.Pods(ctrlcommon.MCONamespace).GetLogs(pod.Name, &corev1.PodLogOptions{
Container: container.Name,
Follow: true,
}).Stream(ctx)

if err != nil {
return fmt.Errorf("could not get logs for container %s in pod %s: %w", container.Name, pod.Name, err)
}

defer logger.Close()

filename := filepath.Join(dirPath, fmt.Sprintf("%s-%s-%s.log", t.Name(), pod.Name, container.Name))
file, err := os.Create(filename)
if err != nil {
return err
}

defer file.Close()

t.Logf("Streaming pod (%s) container (%s) logs to %s", pod.Name, container.Name, filename)
if _, err := io.Copy(file, logger); err != nil {
return fmt.Errorf("could not write pod logs to %s: %w", filename, err)
}

return nil
}

// Skips a given test if it is detected that the cluster is running OKD. We
// skip these tests because they're either irrelevant for OKD or would fail.
func skipOnOKD(t *testing.T) {
cs := framework.NewClientSet("")

isOKD, err := helpers.IsOKDCluster(cs)
require.NoError(t, err)

if isOKD {
t.Logf("OKD detected, skipping test %s", t.Name())
t.Skip()
}
}

// Extracts the contents of a directory within a given container to a temporary
// directory. Next, it loads them into a bytes map keyed by filename. It does
// not handle nested directories, so use with caution.
func convertFilesFromContainerImageToBytesMap(t *testing.T, pullspec, containerFilepath string) map[string][]byte {
tempDir := t.TempDir()

path := fmt.Sprintf("%s:%s", containerFilepath, tempDir)
cmd := exec.Command("oc", "image", "extract", pullspec, "--path", path)
t.Logf("Extracting files under %q from %q to %q; running %s", containerFilepath, pullspec, tempDir, cmd.String())
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
require.NoError(t, cmd.Run())

out := map[string][]byte{}

isCentosImage := strings.Contains(pullspec, "centos")

err := filepath.Walk(tempDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}

if info.IsDir() {
return nil
}

contents, err := ioutil.ReadFile(path)
if err != nil {
return err
}

// Replace $stream with 9-stream in any of the Centos repo content we pulled.
if isCentosImage {
contents = bytes.ReplaceAll(contents, []byte("$stream"), []byte("9-stream"))
}

out[filepath.Base(path)] = contents
return nil
})

require.NoError(t, err)

return out
}
145 changes: 124 additions & 21 deletions test/e2e-techpreview/onclusterbuild_test.go
@@ -2,6 +2,7 @@ package e2e_techpreview_test

import (
"context"
_ "embed"
"flag"
"strings"
"testing"
@@ -31,19 +32,29 @@ const (

// The name of the global pull secret copy to use for the tests.
globalPullSecretCloneName string = "global-pull-secret-copy"

// The custom Dockerfile content to build for the tests.
cowsayDockerfile string = `FROM quay.io/centos/centos:stream9 AS centos
RUN dnf install -y epel-release
FROM configs AS final
COPY --from=centos /etc/yum.repos.d /etc/yum.repos.d
COPY --from=centos /etc/pki/rpm-gpg/RPM-GPG-KEY-* /etc/pki/rpm-gpg/
RUN sed -i 's/\$stream/9-stream/g' /etc/yum.repos.d/centos*.repo && \
rpm-ostree install cowsay`
)

var skipCleanup bool

var (
// Provides a Containerfile that installs cowsay using the Centos Stream 9
// EPEL repository, without requiring any entitlements.
//go:embed Containerfile.cowsay
cowsayDockerfile string

// Provides a Containerfile that installs Buildah from the default RHCOS RPM
// repositories. If the installation succeeds, the entitlement certificate is
// working.
//go:embed Containerfile.entitled
entitledDockerfile string

// Provides a Containerfile that works similarly to the cowsay Dockerfile
// with the exception that the /etc/yum.repos.d and /etc/pki/rpm-gpg key
// content is mounted into the build context by the BuildController.
//go:embed Containerfile.yum-repos-d
yumReposDockerfile string
)

func init() {
// Skips running the cleanup functions. Useful for debugging tests.
flag.BoolVar(&skipCleanup, "skip-cleanup", false, "Skips running the cleanup functions")
@@ -62,36 +73,84 @@ type onClusterBuildTestOpts struct {

// What MachineConfigPool name to use for the test.
poolName string

// Use RHEL entitlements
useEtcPkiEntitlement bool

// Inject YUM repo information from a Centos 9 stream container
useYumRepos bool
}

// Tests that an on-cluster build can be performed with the OpenShift Image Builder.
func TestOnClusterBuildsOpenshiftImageBuilder(t *testing.T) {
// Tests that an on-cluster build can be performed with the Custom Pod Builder.
func TestOnClusterBuildsCustomPodBuilder(t *testing.T) {
runOnClusterBuildTest(t, onClusterBuildTestOpts{
imageBuilderType: build.OpenshiftImageBuilder,
poolName: layeredMCPName,
poolName: layeredMCPName,
customDockerfiles: map[string]string{
layeredMCPName: cowsayDockerfile,
},
})
}

// Tests that an on-cluster build can be performed with the Custom Pod Builder.
func TestOnClusterBuildsCustomPodBuilder(t *testing.T) {
// This test extracts the /etc/yum.repos.d and /etc/pki/rpm-gpg content from a
// Centos Stream 9 image and injects them into the MCO namespace. It then
// performs a build with the expectation that these artifacts will be used,
// simulating a build where someone has added this content; usually a Red Hat
// Satellite user.
func TestYumReposBuilds(t *testing.T) {
runOnClusterBuildTest(t, onClusterBuildTestOpts{
imageBuilderType: build.CustomPodImageBuilder,
poolName: layeredMCPName,
poolName: layeredMCPName,
customDockerfiles: map[string]string{
layeredMCPName: cowsayDockerfile,
layeredMCPName: yumReposDockerfile,
},
useYumRepos: true,
})
}

// Clones the etc-pki-entitlement certificate from the openshift-config-managed
// namespace into the MCO namespace. Then performs an on-cluster layering build
// which should consume the entitlement certificates.
func TestEntitledBuilds(t *testing.T) {
skipOnOKD(t)

runOnClusterBuildTest(t, onClusterBuildTestOpts{
poolName: layeredMCPName,
customDockerfiles: map[string]string{
layeredMCPName: entitledDockerfile,
},
useEtcPkiEntitlement: true,
})
}

// Performs the same build as above, but deploys the built image to a node on
// that cluster and attempts to run the binary installed in the process (in
// this case, buildah).
func TestEntitledBuildsRollsOutImage(t *testing.T) {
skipOnOKD(t)

imagePullspec := runOnClusterBuildTest(t, onClusterBuildTestOpts{
poolName: layeredMCPName,
customDockerfiles: map[string]string{
layeredMCPName: entitledDockerfile,
},
useEtcPkiEntitlement: true,
})

cs := framework.NewClientSet("")
node := helpers.GetRandomNode(t, cs, "worker")
t.Cleanup(makeIdempotentAndRegister(t, func() {
helpers.DeleteNodeAndMachine(t, cs, node)
}))
helpers.LabelNode(t, cs, node, helpers.MCPNameToRole(layeredMCPName))
helpers.WaitForNodeImageChange(t, cs, node, imagePullspec)

t.Log(helpers.ExecCmdOnNode(t, cs, node, "chroot", "/rootfs", "buildah", "--help"))
}

// Tests that an on-cluster build can be performed and that the resulting image
// is rolled out to an opted-in node.
func TestOnClusterBuildRollsOutImage(t *testing.T) {
imagePullspec := runOnClusterBuildTest(t, onClusterBuildTestOpts{
imageBuilderType: build.OpenshiftImageBuilder,
poolName: layeredMCPName,
poolName: layeredMCPName,
customDockerfiles: map[string]string{
layeredMCPName: cowsayDockerfile,
},
@@ -111,20 +170,35 @@ func TestOnClusterBuildRollsOutImage(t *testing.T) {
// Sets up and performs an on-cluster build for a given set of parameters.
// Returns the built image pullspec for later consumption.
func runOnClusterBuildTest(t *testing.T, testOpts onClusterBuildTestOpts) string {
ctx, cancel := context.WithCancel(context.Background())
t.Cleanup(cancel)

cs := framework.NewClientSet("")

t.Logf("Running with ImageBuilder type: %s", testOpts.imageBuilderType)

// Create all of the objects needed to set up our test.
prepareForTest(t, cs, testOpts)

// Opt the test MachineConfigPool into layering to signal the build to begin.
optPoolIntoLayering(t, cs, testOpts.poolName)

t.Logf("Wait for build to start")
var pool *mcfgv1.MachineConfigPool
waitForPoolToReachState(t, cs, testOpts.poolName, func(mcp *mcfgv1.MachineConfigPool) bool {
pool = mcp
return ctrlcommon.NewLayeredPoolState(mcp).IsBuilding()
})

t.Logf("Build started! Waiting for completion...")

// The pod log collection blocks the main Goroutine since we follow the logs
// for each container in the build pod. So they must run in a separate
// Goroutine so that the rest of the test can continue.
go func() {
require.NoError(t, streamBuildPodLogsToFile(ctx, t, cs, pool))
}()

imagePullspec := ""
waitForPoolToReachState(t, cs, testOpts.poolName, func(mcp *mcfgv1.MachineConfigPool) bool {
lps := ctrlcommon.NewLayeredPoolState(mcp)
@@ -134,6 +208,7 @@ func runOnClusterBuildTest(t *testing.T, testOpts onClusterBuildTestOpts) string
}

if lps.IsBuildFailure() {
require.NoError(t, writeBuildArtifactsToFiles(t, cs, pool))
t.Fatalf("Build unexpectedly failed.")
}

@@ -189,6 +264,7 @@ func optPoolIntoLayering(t *testing.T, cs *framework.ClientSet, pool string) fun
// - Gets the Docker Builder secret name from the MCO namespace.
// - Creates the imagestream to use for the test.
// - Clones the global pull secret into the MCO namespace.
// - If requested, clones the RHEL entitlement secret into the MCO namespace.
// - Creates the on-cluster-build-config ConfigMap.
// - Creates the target MachineConfigPool and waits for it to get a rendered config.
// - Creates the on-cluster-build-custom-dockerfile ConfigMap.
@@ -199,14 +275,39 @@ func prepareForTest(t *testing.T, cs *framework.ClientSet, testOpts onClusterBui
pushSecretName, err := getBuilderPushSecretName(cs)
require.NoError(t, err)

// If the test requires RHEL entitlements, clone them from
// "etc-pki-entitlement" in the "openshift-config-managed" namespace.
if testOpts.useEtcPkiEntitlement {
t.Cleanup(copyEntitlementCerts(t, cs))
}

// If the test requires /etc/yum.repos.d and /etc/pki/rpm-gpg, pull a Centos
// Stream 9 container image and populate them from there. This is intended to
// emulate the Red Hat Satellite enablement process, but does not actually
// require any Red Hat Satellite creds to work.
if testOpts.useYumRepos {
t.Cleanup(injectYumRepos(t, cs))
}

// Creates an imagestream to push the built OS image to. This is so that the
// test may be self-contained within the test cluster.
imagestreamName := "os-image"
t.Cleanup(createImagestream(t, cs, imagestreamName))

// Default to the custom pod builder image builder type.
if testOpts.imageBuilderType == "" {
testOpts.imageBuilderType = build.CustomPodImageBuilder
}

// Copy the global pull secret into the MCO namespace.
t.Cleanup(copyGlobalPullSecret(t, cs))

// Get the final image pullspec from the imagestream that we just created.
finalPullspec, err := getImagestreamPullspec(cs, imagestreamName)
require.NoError(t, err)

// Set up the on-cluster-build-config ConfigMap.
cmCleanup := createConfigMap(t, cs, &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: build.OnClusterBuildConfigMapName,
@@ -222,10 +323,13 @@ func prepareForTest(t *testing.T, cs *framework.ClientSet, testOpts onClusterBui

t.Cleanup(cmCleanup)

// Create the MachineConfigPool that we intend to target for the test.
t.Cleanup(makeIdempotentAndRegister(t, helpers.CreateMCP(t, cs, testOpts.poolName)))

// Create the on-cluster-build-custom-dockerfile ConfigMap.
t.Cleanup(createCustomDockerfileConfigMap(t, cs, testOpts.customDockerfiles))

// Wait for our targeted MachineConfigPool to get a base MachineConfig.
_, err = helpers.WaitForRenderedConfig(t, cs, testOpts.poolName, "00-worker")
require.NoError(t, err)
}
@@ -239,7 +343,6 @@ func TestSSHKeyAndPasswordForOSBuilder(t *testing.T) {

// prepare for on cluster build test
prepareForTest(t, cs, onClusterBuildTestOpts{
imageBuilderType: build.OpenshiftImageBuilder,
poolName: layeredMCPName,
customDockerfiles: map[string]string{},
})