Update readme, inference submission cleanups (#117)
* Support docker for mixtral

* Added option to automatically submit mlperf inference results while running the submission checker

* Cleanups for get-mlperf-inference-src

* Create CONTRIBUTORS.md

* Update README.md
arjunsuresh authored Jan 8, 2025
1 parent 976c927 commit 166e8e5
Showing 7 changed files with 120 additions and 13 deletions.
43 changes: 43 additions & 0 deletions CONTRIBUTORS.md
@@ -0,0 +1,43 @@
# Contributors

Thank you for your interest in contributing to **MLPerf Automations**! We welcome contributions that help improve the project and expand its functionality.

---

## How to Become a Contributor

We value all contributions, whether they are code, documentation, bug reports, or feature suggestions. If you contribute **more than 50 lines of code** (including tests and documentation), you will be officially recognized as a project contributor.

**Note:** Trivial contributions, such as minor typo fixes or small formatting changes, will not count toward the 50-line threshold.

To contribute:
1. **Fork** the repository.
2. **Create** a new branch for your feature or bug fix.
3. **Submit** a pull request (PR) describing your changes.
Please see [here](CONTRIBUTING.md) for further guidelines on official contributions to any MLCommons repository. A typical fork-and-branch flow is sketched below.
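
For illustration, a minimal fork-and-branch flow might look like this (`<your-username>` and the branch name are placeholders; the repository URL matches the badges in the README):

```bash
# Fork mlcommons/mlperf-automations on GitHub, then clone your fork
git clone https://github.com/<your-username>/mlperf-automations.git
cd mlperf-automations

# Create a topic branch for your feature or bug fix
git checkout -b my-feature

# ...make your changes, then commit; MLCommons repositories generally
# require a DCO sign-off, which -s adds
git add -A
git commit -s -m "Describe your change"

# Push the branch to your fork and open a PR on GitHub
git push origin my-feature
```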

---

## Contributor Recognition

Once your contribution exceeds 50 lines of code (in total), we will:
- Add your name to this `CONTRIBUTORS.md` file.
- Highlight your contribution in the next release notes.
- Grant you access to suggest and vote on new features.

---

## Current Contributors

- **Grigori Fursin** - *Initial development, CLI workflow support via CMind, core automation features*
- **Arjun Suresh** - *Initial development, core automation features*
- **Anandhu Sooraj** - *Multiple CM scripts for MLPerf Inference*
- **Thomaz Zhu** - *C++ implementation for MLPerf Inference Onnxruntime*
- **Sahil Avaran** - *Logging support in MLPerf script automation*
- **[Your Name Here]** - This could be you! 🎉

---

We believe in collaborative growth, and every contribution makes a difference. Feel free to reach out by opening an issue if you have any questions or ideas.

Happy Coding! 🚀
62 changes: 57 additions & 5 deletions README.md
@@ -1,13 +1,65 @@
 # MLPerf Automations and Scripts

-This repository contains the automations and scripts used to run MLPerf benchmarks, primarily focusing on MLPerf inference benchmarks. The automations used here are largely based on and extended from the [Collective Mind script automations](https://github.com/mlcommons/cm4mlops/tree/main/automation/script).
+[![License](https://img.shields.io/badge/License-Apache%202.0-green)](LICENSE.md)
+[![Downloads](https://static.pepy.tech/badge/cm4mlops)](https://pepy.tech/project/cm4mlops)
+[![CM Script Automation Test](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-cm-script-features.yml/badge.svg)](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-cm-script-features.yml)
+[![MLPerf Inference ABTF POC Test](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlperf-inference-abtf-poc.yml/badge.svg)](https://github.com/mlcommons/mlperf-automations/actions/workflows/test-mlperf-inference-abtf-poc.yml)
+
+Welcome to the **MLPerf Automations and Scripts** repository! This repository provides tools, automations, and scripts to facilitate running MLPerf benchmarks, with a primary focus on **MLPerf Inference benchmarks**.

-## Collective Mind (CM) Automations
+The automations build upon and extend the powerful [Collective Mind (CM) script automations](https://github.com/mlcommons/cm4mlops/tree/main/automation/script) to streamline benchmarking and workflow processes.

-**CM (Collective Mind)** is a Python package with a CLI and API designed to create and manage automations. Two key automations developed using CM are **Script** and **Cache**, which streamline ML workflows, including managing Docker runs.
+---
+
+## 🚀 Key Features
+- **Automated Benchmarking** – Simplifies running MLPerf Inference benchmarks with minimal manual intervention.
+- **Modular and Extensible** – Easily extend the scripts to support additional benchmarks and configurations.
+- **Seamless Integration** – Compatible with Docker, cloud environments, and local machines.
+- **Collective Mind (CM) Integration** – Utilizes the CM framework to enhance reproducibility and automation.

-## License
+---

-[Apache 2.0](LICENSE.md)
+## 🧰 Collective Mind (CM) Automations
+
+The **Collective Mind (CM)** framework is a Python-based package offering both CLI and API support for creating and managing automations. CM automations enhance ML workflows by simplifying complex tasks such as Docker container management and caching.
+
+### Core Automations
+- **Script Automation** – Automates script execution across different environments.
+- **Cache Management** – Manages reusable cached results to accelerate workflow processes.
+
+Learn more about CM in the [CM4MLOps documentation](https://github.com/mlcommons/cm4mlops).

+---
+
+## 🤝 Contributing
+We welcome contributions from the community! To contribute:
+1. Submit pull requests (PRs) to the **`dev`** branch.
+2. Review our [CONTRIBUTORS.md](CONTRIBUTORS.md) for guidelines and best practices.
+3. Explore more about MLPerf Inference automation in the official [MLPerf Inference Documentation](https://docs.mlcommons.org/inference/).
+
+Your contributions help drive the project forward!
+
+---
+
+## 📰 News
+Stay tuned for upcoming updates and announcements.
+
+---
+
+## 📄 License
+This project is licensed under the [Apache 2.0 License](LICENSE.md).
+
+---
+
+## 💡 Acknowledgments and Funding
+This project is made possible through the generous support of:
+- [OctoML](https://octoml.ai)
+- [cKnowledge.org](https://cKnowledge.org)
+- [cTuning Foundation](https://cTuning.org)
+- [MLCommons](https://mlcommons.org)
+
+We appreciate their contributions and sponsorship!
+
+---
+
+Thank you for your interest in and support of MLPerf Automations and Scripts!
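
As a quick, concrete illustration of the CM workflow the README above describes, a minimal session might look like the following sketch (the package name matches the Downloads badge; `detect,os` is just a simple stock script used as an example):

```bash
# Install the CM framework together with the MLOps automation recipes
pip install cm4mlops

# Run a script by its tags: the Script automation resolves dependencies,
# and the Cache automation stores outputs for reuse on later runs
cm run script --tags=detect,os -j

# Inspect cache entries created by earlier script runs
cm show cache
```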
1 change: 1 addition & 0 deletions script/generate-mlperf-inference-submission/_cm.yaml
@@ -69,6 +69,7 @@ input_mapping:
   device: CM_MLPERF_DEVICE
   division: CM_MLPERF_SUBMISSION_DIVISION
   duplicate: CM_MLPERF_DUPLICATE_SCENARIO_RESULTS
+  extra_checker_args: CM_MLPERF_SUBMISSION_CHECKER_EXTRA_ARG
   hw_name: CM_HW_NAME
   hw_notes_extra: CM_MLPERF_SUT_HW_NOTES_EXTRA
   infer_scenario_results: CM_MLPERF_DUPLICATE_SCENARIO_RESULTS
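
With this mapping, an `--extra_checker_args` value given on the command line lands in the script's environment as `CM_MLPERF_SUBMISSION_CHECKER_EXTRA_ARG`. A hypothetical invocation (tag set assumed from the script name, flag value a placeholder):

```bash
# Generate a submission tree, passing extra flags through to the checker
cm run script --tags=generate,mlperf,inference,submission \
    --extra_checker_args="<additional submission-checker flags>"
```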
2 changes: 2 additions & 0 deletions script/get-ml-model-mixtral/_cm.yaml
@@ -6,6 +6,8 @@ category: AI/ML models
 env:
   CM_ML_MODEL_DATASET: ''
   CM_ML_MODEL_WEIGHT_TRANSFORMATIONS: 'no'
+docker:
+  real_run: False
 input_mapping:
   checkpoint: MIXTRAL_CHECKPOINT_PATH
 new_env_keys:
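
This wires up the "Support docker for mixtral" item from the commit message; `real_run: False` suggests the containerized pass only prepares dependencies rather than performing the real download. A sketch, assuming CM's standard `cm docker script` entry point and tags derived from the script name:

```bash
# Fetch the Mixtral checkpoint via the script's Docker mode
cm docker script --tags=get,ml-model,mixtral
```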
14 changes: 7 additions & 7 deletions script/get-mlperf-inference-src/_cm.yaml
@@ -50,8 +50,6 @@ prehook_deps:
     _submodules.:
     - CM_GIT_SUBMODULES
 print_env_at_the_end_disabled:
-  CM_MLPERF_INFERENCE_CONF_PATH: Path to the MLPerf inference benchmark configuration
-    file
-  CM_MLPERF_INFERENCE_SOURCE: Path to MLPerf inference benchmark sources
 tags:
 - get
@@ -154,31 +152,33 @@
       CM_MLPERF_LAST_RELEASE: v2.1
       CM_TMP_GIT_CHECKOUT: v2.1
   r3.0:
-    adr:
+    ad:
       inference-git-repo:
         tags: _tag.v3.0
     env:
       CM_MLPERF_LAST_RELEASE: v3.0
       CM_TMP_GIT_CHECKOUT: ''
   r3.1:
-    adr:
+    ad:
       inference-git-repo:
         tags: _tag.v3.1
     env:
       CM_MLPERF_LAST_RELEASE: v3.1
       CM_TMP_GIT_CHECKOUT: ''
+      CM_GIT_CHECKOUT_TAG: 'v3.1'
   r4.0:
-    adr:
+    ad:
       inference-git-repo:
         tags: _tag.v4.0
     env:
       CM_MLPERF_LAST_RELEASE: v4.0
+      CM_GIT_CHECKOUT_TAG: 'v4.0'
   r4.1:
-    adr:
+    ad:
       inference-git-repo:
         tags: _tag.v4.1
     env:
       CM_MLPERF_LAST_RELEASE: v4.1
+      CM_GIT_CHECKOUT_TAG: 'v4.1'
   r5.0:
     env:
       CM_MLPERF_LAST_RELEASE: v5.0
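
Each `rX.Y` key above should be selectable through the script's standard `--version` input; for example (command form assumed, version key taken from the diff):

```bash
# Check out the MLPerf inference sources pinned for round 4.1; the r4.1
# entry above now also sets CM_GIT_CHECKOUT_TAG=v4.1 for the git clone
cm run script --tags=get,mlperf,inference,src --version=r4.1 -j
```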
3 changes: 2 additions & 1 deletion script/get-mlperf-inference-src/customize.py
@@ -41,7 +41,8 @@ def preprocess(i):
     # If not, try to assign the values specified in the version parameters;
     # if the version parameters do not provide a value for a parameter, use
     # the default one
-    if env.get('CM_GIT_CHECKOUT', '') == '':
+    if env.get('CM_GIT_CHECKOUT', '') == '' and env.get(
+            'CM_GIT_CHECKOUT_TAG', '') == '':
         if env.get('CM_TMP_GIT_CHECKOUT', '') != '':
             env["CM_GIT_CHECKOUT"] = env["CM_TMP_GIT_CHECKOUT"]
         else:
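
The widened condition means an explicitly provided `CM_GIT_CHECKOUT_TAG` (as the new version entries set) now also suppresses the fallback checkout assignment. A hypothetical direct override, assuming CM's `--env.<VAR>` passthrough:

```bash
# Setting the tag explicitly skips the CM_GIT_CHECKOUT defaulting above
cm run script --tags=get,mlperf,inference,src --env.CM_GIT_CHECKOUT_TAG=v4.1
```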
8 changes: 8 additions & 0 deletions script/run-mlperf-inference-submission-checker/_cm.yaml
@@ -36,6 +36,7 @@ deps:
   tags: preprocess,mlperf,inference,submission
 input_mapping:
   extra_args: CM_MLPERF_SUBMISSION_CHECKER_EXTRA_ARGS
+  extra_checker_args: CM_MLPERF_SUBMISSION_CHECKER_EXTRA_ARGS
   extra_model_benchmark_map: CM_MLPERF_EXTRA_MODEL_MAPPING
   input: CM_MLPERF_INFERENCE_SUBMISSION_DIR
   power: CM_MLPERF_POWER
@@ -50,6 +51,7 @@ input_mapping:
   src_version: CM_MLPERF_SUBMISSION_CHECKER_VERSION
   submission_dir: CM_MLPERF_INFERENCE_SUBMISSION_DIR
   submitter: CM_MLPERF_SUBMITTER
+  submitter_id: CM_MLPERF_SUBMITTER_ID
   tar: CM_TAR_SUBMISSION_DIR
 post_deps:
 - enable_if_env:
@@ -66,6 +68,12 @@ post_deps:
     CM_TAR_SUBMISSION_DIR:
     - 'yes'
   tags: run,tar
+- enable_if_env:
+    CM_SUBMITTER_ID:
+    - 'yes'
+  tags: submit,mlperf,results,_inference
+  env:
+    CM_MLPERF_SUBMISSION_FILE: <<<MLPERF_INFERENCE_SUBMISSION_TAR_FILE>>>
 tags:
 - run
 - mlc
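
Taken together, these additions implement the auto-submission mentioned in the commit message: when a submitter id is supplied, the checker's new post-dependency hands the tarred submission to the submit script. A sketch (input names from the mapping above, placeholders in angle brackets):

```bash
# Run the submission checker, tar the submission directory, and let the
# new post-dependency submit the resulting tarball
cm run script --tags=run,mlperf,inference,submission,checker \
    --submission_dir=<path-to-submission-tree> \
    --submitter_id=<your-mlcommons-submitter-id> \
    --tar=yes
```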
