This is a repository containing utilities for running Ray on Cloud TPUs. For more information about TPUs, please check out the official Google Cloud documentation here.
TPUs differ from other accelerators like GPUs in that they are "pod-centric": scheduling jobs and workloads on TPUs requires awareness of slice topologies and other factors. This package introduces higher-level utilities that simplify running Ray workloads on TPU pod slices as if they were single nodes.
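To make the "pod slice as a single node" idea concrete, here is a minimal sketch of the per-host bookkeeping you would otherwise do with vanilla Ray. The `"TPU"` resource name, the chips-per-host count, and the slice size are assumptions about a particular cluster setup, not part of this package's API; ray-tpu is the layer intended to hide this kind of manual scheduling.

```python
# Illustrative only: manually scheduling one task per TPU host in a slice
# using vanilla Ray custom resources. The "TPU" resource name and the
# 4-chips-per-host figure are assumptions about the cluster setup.
import ray

ray.init()  # connect to the Ray cluster running on the TPU pod slice


@ray.remote(resources={"TPU": 4})  # pin one task per TPU host (assumed 4 chips each)
def train_shard(host_index: int) -> str:
    # Real code would initialize JAX / PyTorch-XLA here and run a training step.
    return f"host {host_index} done"


NUM_HOSTS = 4  # e.g. a slice with 4 hosts; adjust to your topology
results = ray.get([train_shard.remote(i) for i in range(NUM_HOSTS)])
print(results)
```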
Run the following command to install the package:
pip install ray-tpu
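Assuming the package installs under the module name `ray_tpu` (an assumption based on the pip package name), you can sanity-check the installation with a quick import:

```python
# Quick install check; the module name ray_tpu is assumed from the pip package name.
import ray_tpu  # raises ImportError if the installation did not succeed
```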
Check out the tutorials section for more details.