From f271669767d821822258fb9c0e2deadae9b2cc13 Mon Sep 17 00:00:00 2001
From: Ella Charlaix
Date: Fri, 3 May 2024 18:39:21 +0200
Subject: [PATCH] fix

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d4bbde1fa9..aedd7f8149 100644
--- a/README.md
+++ b/README.md
@@ -243,4 +243,4 @@ pip install -r requirements.txt
 
 ## Gaudi
 
-To train your model on [Habana's Gaudi processor (HPU)](https://docs.habana.ai/en/latest/index.html), check out [`optimum-habana`](https://github.com/huggingface/optimum-habana). After training your model, feel free to submit it to the Intel [leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard) which is designed to evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel Hardwares. Models submitted to the leaderboard will be evaluated on the Intel Developer Cloud. The evaluation platform consists of Gaudi Accelerators and Xeon CPUs running benchmarks from the Eleuther AI Language Model Evaluation Harness.
+To train your model on [Habana's Gaudi processor (HPU)](https://docs.habana.ai/en/latest/index.html), check out [Optimum Habana](https://github.com/huggingface/optimum-habana), which provides a set of tools enabling easy model loading, training, and inference in single- and multi-HPU settings for different downstream tasks. After training your model, feel free to submit it to the Intel [leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard), which is designed to evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel hardware. Models submitted to the leaderboard will be evaluated on the Intel Developer Cloud. The evaluation platform consists of Gaudi Accelerators and Xeon CPUs running benchmarks from the Eleuther AI Language Model Evaluation Harness.