@sheurich sheurich commented Sep 9, 2025

Changes

Build System

  • tools/container-build.sh: Removed forced amd64 cross-compilation on ARM hosts. Builds now run natively for the host architecture (amd64 or arm64); override with DOCKER_DEFAULT_PLATFORM.
  • tools/make-deb.sh: Package architecture is now determined by the ARCH environment variable.
  • Containerfile: Added TARGETPLATFORM, BUILDER_BASE, and FINAL_BASE build args for cross-platform Go downloads and configurable base images.
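
As an illustration, the new args can be exercised directly (a sketch; the base-image values below are placeholders, not the project's defaults):

# BuildKit sets TARGETPLATFORM automatically when --platform is given.
docker build \
  --platform linux/arm64 \
  --build-arg BUILDER_BASE=docker.io/library/golang:1.24 \
  --build-arg FINAL_BASE=docker.io/library/ubuntu:24.04 \
  --file Containerfile \
  --tag boulder:dev-arm64 \
  .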

CI/CD Workflows

  • release.yml: Split into 3 jobs (build-artifacts, create-release, push-images). Matrix builds amd64 on ubuntu-24.04 and arm64 on ubuntu-24.04-arm. Creates multi-platform manifest for GHCR images.
  • try-release.yml: Added matrix to test both architectures.
  • .dockerignore/.gitignore: Added .github directory and build artifacts.

Versioning

  • Changed version scheme from ${GO_VERSION}.$(date +%s) to ${GO_VERSION}.${COMMIT_TIMESTAMP} for reproducible builds.
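
For example (a sketch; the scripts' exact git invocation may differ):

# Use the committer timestamp instead of wall-clock time, so rebuilding
# the same commit yields the same version string.
GO_VERSION=1.24.6
COMMIT_TIMESTAMP="$(git log -1 --format=%ct)"
VERSION="${GO_VERSION}.${COMMIT_TIMESTAMP}"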

Image Tags

  • Repository owner now uses ${{ github.repository_owner }} instead of hardcoded letsencrypt.
  • Architecture-specific tags: boulder:${VERSION}-amd64, boulder:${VERSION}-arm64
  • Generic tags preserved: boulder:${VERSION}, boulder

Artifacts

  • .deb packages: boulder-${VERSION}-${COMMIT_ID}.amd64.deb, .arm64.deb
  • Tarballs: boulder-${VERSION}-${COMMIT_ID}.amd64.tar.gz, .arm64.tar.gz
  • Container images: Multi-platform manifest with both architectures

Testing

# Native build (amd64 or arm64)
GO_VERSION=1.24.6 ./tools/container-build.sh

# Force specific architecture
DOCKER_DEFAULT_PLATFORM=linux/amd64 GO_VERSION=1.24.6 ./tools/container-build.sh

@sheurich sheurich requested a review from a team as a code owner September 9, 2025 17:04
@sheurich sheurich requested a review from jprenken September 9, 2025 17:04
@sheurich
Contributor Author

sheurich commented Sep 9, 2025

This PR introduces a breaking change in artifact naming that we should discuss:

Current Change:

  • Before: boulder-1.25.0.xxx-commit.x86_64.tar.gz and boulder-1.25.0.xxx-commit.x86_64.deb
  • After: boulder-1.25.0.xxx-commit.amd64.tar.gz and boulder-1.25.0.xxx-commit.amd64.deb

The Question:

Should we maintain backward compatibility by keeping x86_64 naming for AMD64 artifacts?

Considerations:

Arguments for standardized naming (amd64):

  • Consistent with Docker/Debian conventions
  • Cleaner, more predictable naming scheme

Arguments for backward compatibility (x86_64):

  • Won't break existing CI/CD pipelines
  • Won't break download scripts expecting current names

Potential Impact:

  • Any automation that downloads artifacts by name
  • CI/CD systems that expect specific filename patterns
  • Documentation referencing artifact names

Implementation Options:

  1. Keep current PR as-is (breaking change, but cleaner)
  2. Preserve x86_64 naming for AMD64 while using arm64 for ARM
  3. Add both naming schemes temporarily with deprecation timeline

What's your preference? Will any existing systems be impacted by this naming change?

jprenken
jprenken previously approved these changes Sep 9, 2025
Contributor

@jprenken jprenken left a comment

This looks great to me.

I slightly favour moving to standardized naming (amd64). We'll probably need to change a few things internally, but just a few. I think that's worth it.

Between this and #8386, whichever merges last should be modified to (ideally) handle uploading both architectures' images when tagging a release, or at least make sure builds stay amd64 as a temporary quick fix.

Contributor

@jsha jsha left a comment

This looks great, thanks for putting in the time to make the dev experience better, @sheurich!

I'm fine with changing the platform name; we can update our build scripts in prod pretty easily.

Comment on lines 43 to 46
--tag "boulder:${VERSION}-${ARCH}" \
--tag "boulder:${VERSION}" \
--tag "boulder:${COMMIT_ID}" \
--tag boulder \
Contributor

I didn't give a ton of thought to these tags when we first wrote container-build.sh. Looking at them now, it seems there will be collisions if we start uploading multiple architectures from this script: for any given version, boulder:${VERSION} could be one arch or the other.

Probably a single --tag "boulder:${VERSION}-${ARCH}" would suffice and keep things simple. What do you think @jprenken ?

@sheurich I see that you repeat the string "boulder:${VERSION}-${ARCH}" here and on the two docker run commands below. Let's put that into a single TAG env var and use that in each location for consistency.
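
Something like this (a sketch; the build and run arguments are placeholders for the script's real ones):

# Define the tag once, reuse it everywhere the image is referenced.
TAG="boulder:${VERSION}-${ARCH}"
docker build --tag "${TAG}" --file Containerfile .
docker run --rm "${TAG}" true   # stand-in for the script's real run commands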

Contributor Author

Updated this. A follow-on step involving docker buildx imagetools create -t letsencrypt/boulder:${VERSION} letsencrypt/boulder:${VERSION}-amd64 letsencrypt/boulder:${VERSION}-arm64 will be needed to glue the images into a single multi-arch manifest.
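
Spelled out (a sketch of that follow-on step):

docker buildx imagetools create \
  --tag "letsencrypt/boulder:${VERSION}" \
  "letsencrypt/boulder:${VERSION}-amd64" \
  "letsencrypt/boulder:${VERSION}-arm64"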

Comment on lines 18 to 28
# Determine architecture - use ARCH env var if set, otherwise detect from uname
if [ -n "${ARCH:-}" ]; then
DEB_ARCH="${ARCH}"
else
case "$(uname -m)" in
"x86_64") DEB_ARCH="amd64" ;;
"aarch64"|"arm64") DEB_ARCH="arm64" ;;
*) echo "Unsupported architecture: $(uname -m)" && exit 1 ;;
esac
fi

Contributor

make-deb.sh is a transitional tool that keeps building .debs until we move to running containers.

I think it's okay to assume $ARCH will always be set, because the only place we call it from is container-build.sh, which sets it.
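
Under that assumption, the whole detection block collapses to one line (a sketch):

# Fail fast with a clear error if ARCH is ever unset.
DEB_ARCH="${ARCH:?must be set, normally by container-build.sh}"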

@jprenken
Contributor

Between this and #8386, whichever merges last should be modified to (ideally) handle uploading both architectures' images when tagging a release, or at least make sure builds stay amd64 as a temporary quick fix.

#8386 just merged, so if possible, let's get that into this PR. Sorry about the hassle!

@sheurich
Contributor Author

@jprenken et al., I think this last set of changes addresses the feedback and adds full multi-arch image builds on ghcr.io.

@sheurich sheurich requested review from jprenken and jsha September 11, 2025 21:51
jprenken
jprenken previously approved these changes Sep 12, 2025
@sheurich
Contributor Author

The comments in the try-release and release workflows indicate a single Go version is used for release and multiple versions are possible for try-release. Is this still correct? I would like to externalize the GO_VERSION to simplify the workflows.

If both release and try-release will only need one version, either:

  • Read the version specified in go.mod.
    OR
  • Use the convention of a .go-version file in the repo root with contents like:
1.25.0

If one or both of the workflows depend on having multiple versions:

  • Use a .github/go-versions.json file with contents like:
{
  "versions": ["1.25.0", "1.24.6"]
}

The release workflow can enforce a single version by choosing only the first entry if desired.
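
Either single-version option is a one-line workflow step (a sketch):

# From go.mod (its directive looks like "go 1.25.0"):
GO_VERSION="$(awk '/^go / {print $2}' go.mod)"
# Or from a .go-version file:
GO_VERSION="$(cat .go-version)"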

jprenken
jprenken previously approved these changes Sep 23, 2025
@aarongable
Contributor

The comments in the try-release and release workflows indicate a single Go version is used for release and multiple versions are possible for try-release. Is this still correct? I would like to externalize the GO_VERSION to simplify the workflows.

Yes, it's important that the try-release build target multiple Go versions. We've had this breakage in the past (i.e. we were testing CI for an upcoming version of Go, but when we went to make that the default, the release build was broken). We've even had situations in which we want the real release build to produce multiple versions, because we aren't sure whether prod is going to update to the new Go version before or after the new Boulder version is deployed, or because RVAs are running a different Go version from the on-prem services.

If both release and try-release will only need one version, either:

  • Read the version specified in go.mod.

Even if one version were acceptable, this would be a semantically messy solution. The version indicated in go.mod should only be updated when the minimum version of the stdlib required by the project's code moves forward; i.e. it should represent a required minimum version, not a target version.

  • Use a .github/go-versions.json file with contents like:
{
  "versions": ["1.25.0", "1.24.6"]
}

Is this technique -- externalizing a workflow's matrix to a data file -- widely used? Is it natively supported by GitHub Actions, or would we need to add a bunch of workflow steps to read this file? If there's no native support, doesn't that mean we'd have to bundle basically the whole action into a script that loops over the values in that data file, rather than using the built-in "matrix" support to run all the steps multiple times?

@sheurich
Contributor Author

@aarongable thanks for the feedback! I understand your concerns and will move the discussion of Go build target versioning to a new issue. This PR already accomplishes the multi-architecture build goal and I don't want to hold it up on this tangential point.
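
For reference when filing that issue: the externalized-matrix pattern does have native support. One job reads the file into a job output, and a later job's strategy.matrix consumes that output via fromJSON. The read step is a one-liner (a sketch):

# Expose the versions array as a job output for fromJSON to consume.
matrix="$(jq -c '.versions' .github/go-versions.json)"
echo "matrix=${matrix}" >> "${GITHUB_OUTPUT}"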

sheurich and others added 8 commits October 21, 2025 13:50
- Added TARGETPLATFORM argument to Containerfile for architecture-specific builds.
- Updated container-build.sh to detect architecture and set appropriate platform.
- Modified make-deb.sh to dynamically set the architecture in the .deb package.
This improves the local development experience by providing stable,
predictable tags to use for testing, without affecting the
architecture-specific tags required by the CI/release process.
@sheurich
Contributor Author

I fixed the merge conflicts and multi-arch builds are working. Ready for a final review.

@sheurich sheurich requested a review from jprenken October 21, 2025 19:01
@jsha
Contributor

jsha commented Oct 21, 2025

Hi @sheurich,

It's been a while since I looked at this PR, and I realize it's grown a lot bigger since then! Some of the changes, like adding QEMU for our release builds, look like they will make our builds slower and more complex.

The original PR description said "This enables efficient local development and lays the foundation for future parallel multi-architecture CI builds." The current version shows us those multi-architecture CI builds, and I'm thinking the tradeoff of having those in the main Boulder repo is probably not worth it.

Am I right in assuming you're deploying arm64 releases to prod? If so, would it be reasonable to build those in CI in your own fork of Boulder?
