© 2023, Anyscale Inc. All Rights Reserved
Machine learning (ML) pipelines involve a variety of computationally intensive stages. As state-of-the-art models and systems demand more compute, there is an urgent need for adaptable tools to scale ML workloads. This need drove the creation of Ray—an open source, distributed compute framework for ML that not only powers systems like ChatGPT but has also set records on large-scale computing benchmarks. Ray AIR is especially useful for parallelizing ML workloads such as preprocessing images, model training and fine-tuning, and batch inference. In this tutorial, participants will learn about AIR's composable APIs through hands-on coding exercises.
- Intermediate-level Python and ML researchers and developers.
- Those interested in scaling ML workloads beyond a single laptop to a cluster.
- Familiarity with basic ML concepts and workflows.
- No prior experience with Ray or distributed computing.
- (Optional) Overview of Ray notebook as background material.
You can learn and get more involved with the Ray community of developers and researchers:
- Ray documentation
- Official Ray site: Browse the ecosystem and use this site as a hub to get the information you need to get going and building with Ray.
- Join the community on Slack: Find friends to discuss what you're learning in our Slack space.
- Use the discussion board: Ask questions, follow topics, and view announcements on this community forum.
- Join a meetup group: Tune in to meetups to listen to compelling talks, get to know other users, and meet the team behind Ray.
- Open an issue: Ray is constantly evolving to improve the developer experience. Submit feature requests and bug reports, and get help via GitHub issues.