[CI] Add remaining Ubuntu 24.04 Dockerfile #16457

Merged on Dec 23, 2024 (5 commits)
6 changes: 5 additions & 1 deletion .github/workflows/sycl-containers.yaml
@@ -43,10 +43,14 @@ jobs:
           file: ubuntu2404_base
           tag: latest
           build_args: ""
-        - name: Build Ubuntu Docker image
+        - name: Build Ubuntu 22.04 Docker image
           file: ubuntu2204_build
           tag: latest
           build_args: ""
+        - name: Build Ubuntu 24.04 Docker image
+          file: ubuntu2404_build
+          tag: latest
+          build_args: ""
         - name: Build Ubuntu 24.04 oneAPI Docker image
           file: ubuntu2404_build_oneapi
           tag: latest
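
Each matrix entry above feeds the image build step; conceptually, the new entry amounts to something like the following manual build (a sketch only: the exact action and build context the workflow uses aren't shown in this diff, and the `devops` context is an assumption based on the repository layout):

```sh
# Rough equivalent of the new "Build Ubuntu 24.04 Docker image" entry;
# paths and tags are assumptions, not the workflow's exact invocation.
docker build \
  -f devops/containers/ubuntu2404_build.Dockerfile \
  -t ghcr.io/intel/llvm/ubuntu2404_build:latest \
  devops
```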
42 changes: 42 additions & 0 deletions devops/containers/ubuntu2404_build.Dockerfile
@@ -0,0 +1,42 @@
FROM nvidia/cuda:12.6.3-devel-ubuntu24.04
Contributor:
The Ubuntu 22.04 docker container uses CUDA 12.1.0 (https://github.com/intel/llvm/blob/sycl/devops/containers/ubuntu2204_build.Dockerfile#L1). Are we good to use CUDA 12.6.3 with DPC++?
Similarly, the Ubuntu 22.04 container uses ROCm 6.1.1 while the Ubuntu 24.04 one uses ROCm 6.3.

@sarnex (Contributor Author), Dec 23, 2024:
Good question. Based on the Codeplay docs (Codeplay owns the AMD/NVidia support), CUDA 12+ works but isn't the version they are currently testing on:

[screenshot: Codeplay documentation listing supported CUDA versions]

And for ROCm:

[screenshot: Codeplay documentation listing supported ROCm versions]

The doc doesn't list 6.3 specifically, but it says a wide variety of versions should work. Also, one of the runners is already using an unsupported GPU, so it's not as if we are strictly following this doc today.

So IMO, based on the doc, it's fine to try the newer versions, and we can revert if there's some problem.
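
A quick way to confirm which toolkit versions an image actually ships (a hypothetical spot check, not part of this PR; the ROCm version file path is the conventional install location):

```sh
# Print the CUDA toolkit version baked into the CUDA base image.
docker run --rm nvidia/cuda:12.6.3-devel-ubuntu24.04 nvcc --version

# Print the installed ROCm version from the built image.
docker run --rm ghcr.io/intel/llvm/ubuntu2404_build:latest \
  cat /opt/rocm/.info/version
```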

Contributor:

Seems reasonable to me. @npmiller @JackAKirk FYI.

@sarnex (Contributor Author):

I'm going to merge this for now. This PR doesn't make us use the new versions in testing yet; it just adds the Docker images. There will be a subsequent PR to move the CI over to those new images. Happy to address any feedback from the Codeplay team.


ENV DEBIAN_FRONTEND=noninteractive

USER root

# Install SYCL prerequisites
COPY scripts/install_build_tools.sh /install.sh
RUN /install.sh

SHELL ["/bin/bash", "-ec"]

# Make the directory if it doesn't exist yet.
# This location is recommended by the distribution maintainers.
RUN mkdir --parents --mode=0755 /etc/apt/keyrings
# Download the key, convert the signing key to the full keyring format
# required by apt, and store it in the keyring directory
RUN wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | \
gpg --dearmor | tee /etc/apt/keyrings/rocm.gpg > /dev/null && \
# Add rocm repo
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/6.3/ubuntu noble main" \
| tee /etc/apt/sources.list.d/amdgpu.list && \
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/6.3 noble main" \
| tee --append /etc/apt/sources.list.d/rocm.list && \
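# Pin the repo.radeon.com packages above the distro archive so the
# ROCm repo's builds take precedence (assumed intent of priority 600).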
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' \
| tee /etc/apt/preferences.d/rocm-pin-600
# Install the ROCm development packages
RUN apt update && apt install -yqq rocm-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

COPY scripts/create-sycl-user.sh /user-setup.sh
RUN /user-setup.sh

COPY scripts/docker_entrypoint.sh /docker_entrypoint.sh

USER sycl

ENTRYPOINT ["/docker_entrypoint.sh"]
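
Once published, the image can be exercised locally along these lines (a sketch; it assumes the entrypoint script execs the given command as the `sycl` user created above):

```sh
# Pull the published image and open an interactive shell inside it.
docker pull ghcr.io/intel/llvm/ubuntu2404_build:latest
docker run -it --rm ghcr.io/intel/llvm/ubuntu2404_build:latest /bin/bash
```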

36 changes: 23 additions & 13 deletions sycl/doc/developer/DockerBKMs.md
@@ -36,20 +36,33 @@ identical for Docker and Podman. Choose whatever is available on your system.

The following containers are publicly available for DPC++ compiler development:

- `ghcr.io/intel/llvm/ubuntu2204_base`: contains basic Ubuntu 22.04 environment
setup for building DPC++ compiler from source.
- `ghcr.io/intel/llvm/ubuntu2404_base`: contains basic Ubuntu 24.04 environment
### Ubuntu 22.04-based images

- `ghcr.io/intel/llvm/ubuntu2204_base`: contains basic environment
setup for building DPC++ compiler from source.
- `ghcr.io/intel/llvm/ubuntu2204_intel_drivers`: contains everything from the
Ubuntu 22.04 base container + pre-installed Intel drivers.
base container + pre-installed Intel drivers.
The image comes in two flavors/tags:
* `latest`: Intel drivers are downloaded from release/tag and saved in
dependencies.json. The drivers are tested/validated every time we upgrade
the driver.
* `alldeps`: Includes the same Intel drivers as `latest`, as well as the
development kits for NVidia/AMD from the `ubuntu2204_build` container.
- `ghcr.io/intel/llvm/ubuntu2204_build`: has development kits installed for
NVidia/AMD and can be used for building DPC++
compiler from source with all backends enabled or for end-to-end testing
with HIP/CUDA on machines with corresponding GPUs available.
- `ghcr.io/intel/llvm/sycl_ubuntu2204_nightly`: contains the latest successfully
built nightly build of DPC++ compiler. The image comes in three flavors:
with pre-installed Intel drivers (`latest`), without them (`no-drivers`) and
with development kits installed (`build`).

### Ubuntu 24.04-based images

- `ghcr.io/intel/llvm/ubuntu2404_base`: contains basic environment
setup for building DPC++ compiler from source.
- `ghcr.io/intel/llvm/ubuntu2404_intel_drivers`: contains everything from the
Ubuntu 24.04 base container + pre-installed Intel drivers.
base container + pre-installed Intel drivers.
The image comes in three flavors/tags:
* `latest`: Intel drivers are downloaded from release/tag and saved in
dependencies.json. The drivers are tested/validated every time we upgrade
@@ -58,14 +71,11 @@
other drivers are downloaded from release/tag and saved in dependencies.json.
* `unstable`: Intel drivers are downloaded from release/latest.
The drivers are installed as is, not tested or validated.
- `ghcr.io/intel/llvm/ubuntu2204_build`: has development kits installed for
NVidia/AMD and can be used for building DPC++ compiler from source with all
backends enabled or for end-to-end testing with HIP/CUDA on machines with
corresponding GPUs available.
- `ghcr.io/intel/llvm/sycl_ubuntu2204_nightly`: contains the latest successfully
built nightly build of DPC++ compiler. The image comes in three flavors:
with pre-installed Intel drivers (`latest`), without them (`no-drivers`) and
with development kits installed (`build`).
- `ghcr.io/intel/llvm/ubuntu2404_build`: has development kits installed for
NVidia/AMD and can be used for building DPC++
compiler from source with all backends enabled or for end-to-end testing
with HIP/CUDA on machines with corresponding GPUs available.
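
For example, to grab one of these images and sanity-check its toolchain (hypothetical usage; the compiler being on `PATH` is an assumption about the nightly image):

```sh
# Run the nightly image's prebuilt DPC++ compiler to check its version.
docker run --rm ghcr.io/intel/llvm/sycl_ubuntu2204_nightly:no-drivers \
  clang++ --version
```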


## Running Docker container interactively
