diff --git a/docker/README.md b/docker/README.md
index 9232f6294a..f7b5e987f1 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -65,7 +65,7 @@ Creates a docker image with publicly available `torchserve` and `torch-model-arc
 - To create a GPU based image with cuda 10.2. Options are `cu92`, `cu101`, `cu102`, `cu111`, `cu113`, `cu116`, `cu117`, `cu118` for CUDA and `rocm60`, `rocm61`, `rocm62` for ROCm.

-  - GPU images are built with either NVIDIA CUDA base image amd AMD ROCm base image. If you want to use ONNX, please specify the base image as shown in the next section.
+  - GPU images are built with either an NVIDIA CUDA base image or an AMD ROCm base image. If you want to use ONNX, please specify the base image as shown in the next section.

 ```bash
 ./build_image.sh -g -cv cu117
 ```
@@ -183,6 +183,7 @@ Creates a docker image with `torchserve` and `torch-model-archiver` installed fr
 ./build_image.sh -bt dev -g [-cv cu121|cu118] -cpp
 ```

+- For ROCm support (*experimental*), refer to [this documentation](../docs/hardware_support/amd_support.md).

 ## Start a container with a TorchServe image

diff --git a/docs/hardware_support/amd_support.md b/docs/hardware_support/amd_support.md
index bc2090917d..7029b2f36e 100644
--- a/docs/hardware_support/amd_support.md
+++ b/docs/hardware_support/amd_support.md
@@ -60,6 +60,33 @@ If you have 8 accelerators but only want TorchServe to see the last four of them
 > ⚠️ Setting both `CUDA_VISIBLE_DEVICES` and `HIP_VISIBLE_DEVICES` may cause unintended behaviour and should be avoided.
 > Doing so may cause an exception in the future.

+## Docker
+
+**In Development**
+
+The `Dockerfile` and `build_image.sh` script provide ROCm support for TorchServe.
+
+Building and running `dev-image`:
+
+```bash
+./build_image.sh -bt dev -g -rv rocm62 -t torch-serve-dev-image-rocm
+docker run -it --rm --device=/dev/kfd --device=/dev/dri torch-serve-dev-image-rocm bash
+```
+
+Building and running `ci-image`:
+
+```bash
+./build_image.sh -bt ci -g -rv rocm62 -t torch-serve-ci-image-rocm
+docker run -it --rm --device=/dev/kfd --device=/dev/dri torch-serve-ci-image-rocm
+```
+
+Building and running `production-image`:
+
+```bash
+./build_image.sh -bt production -g -rv rocm62 -t torch-serve-production-image-rocm
+docker run -it --rm --device=/dev/kfd --device=/dev/dri torch-serve-production-image-rocm
+```
+
 ## Example Usage

 After installing TorchServe with the required dependencies for ROCm you should be ready to serve your model.
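
As a companion to the `## Example Usage` section touched above, here is a minimal usage sketch of the ROCm `production-image` built by the commands in this diff. The published ports, the mounted `model_store/` directory, the model archive name `my_model.mar`, and the input file are illustrative assumptions, not part of this change.

```bash
# Hedged sketch: run the ROCm production image built above, exposing the
# default TorchServe inference (8080) and management (8081) ports and
# mounting a local model store. Paths and model names are placeholders.
docker run -d --rm --device=/dev/kfd --device=/dev/dri \
    -p 8080:8080 -p 8081:8081 \
    -v $(pwd)/model_store:/home/model-server/model-store \
    torch-serve-production-image-rocm

# Liveness check against the inference API.
curl http://localhost:8080/ping

# Register a pre-built model archive via the management API, then query it.
curl -X POST "http://localhost:8081/models?url=my_model.mar&initial_workers=1"
curl http://localhost:8080/predictions/my_model -T sample_input.json
```

The `--device=/dev/kfd --device=/dev/dri` flags from the commands above are what expose the AMD accelerators to the container; everything else follows the standard TorchServe Docker workflow.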