diff --git a/prez/prez.html b/prez/prez.html
index 301a2ed..eeeb635 100644
--- a/prez/prez.html
+++ b/prez/prez.html
@@ -1397,7 +1397,7 @@
Experimentation Results
Report used: the 2019 Evaluation of UNHCR’s data use and information management approaches, with two test summaries (#1 & #2).
Models Tested: Small large language models that can run on a strong laptop: Command-r & Mixtral for the generation, bge-large-en-v1.5 for the embeddings.
Integration & Documentation: Use of LangChain for the orchestration. Code shared and documented on GitHub.
-Human Validation: Ground truthing with labelStud.io.
+Human Validation: Ground truthing with labelStud.io.
Evaluation: Assess accuracy, relevance, and efficiency using RAGAS (Retrieval Augmented Generation Assessment).
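
For reference on the orchestration step above, here is a minimal sketch of such a LangChain RAG pipeline, assuming the langchain-community integrations and Mixtral served locally through Ollama; the report path, chunking parameters, and question are illustrative placeholders, not values from the actual repository.

```python
# Minimal RAG sketch for the setup described above. Assumptions: the
# langchain-community integrations (LangChain 0.1.x era, matching the linked
# docs) and Mixtral served via Ollama; file path and parameters are illustrative.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

# Load and chunk the evaluation report (path is hypothetical).
docs = PyPDFLoader("unhcr_2019_data_evaluation.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed with bge-large-en-v1.5 and index in a local FAISS store.
store = FAISS.from_documents(
    chunks, HuggingFaceEmbeddings(model_name="BAAI/bge-large-en-v1.5")
)

# Small LLM that runs on a strong laptop, e.g. Mixtral behind Ollama.
llm = Ollama(model="mixtral")

qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever(search_kwargs={"k": 4}))
print(qa.invoke({"query": "What were the main findings on data use?"})["result"])
```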
diff --git a/prez/prez.qmd b/prez/prez.qmd
index a1d8300..761c521 100644
--- a/prez/prez.qmd
+++ b/prez/prez.qmd
@@ -222,7 +222,7 @@ See [full article here](https://edouard-legoupil.github.io/rag_extraction/){targ
3. **Integration & Documentation**: Use of [LangChain](https://python.langchain.com/v0.1/docs/use_cases/question_answering/){target="_blank"} for the orchestration. Code shared and documented on [GitHub](https://github.com/Edouard-Legoupil/rag_extraction/){target="_blank"}.
-4. **Human Validation**: Ground truthing with [labelStud.io](https://labelstud.io/){target="_blank"}.
+4. **Human Validation**: Ground truthing with [labelStud.io](https://labelstud.io/templates/generative-llm-ranker){target="_blank"}.
5. **Evaluation**: Assess accuracy, relevance, and efficiency using [RAGAS (Retrieval Augmented Generation Assessment)](https://docs.ragas.io/en/stable/){target="_blank"}.
:::
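
As a companion to step 5, a hedged sketch of what the RAGAS scoring could look like; the records below are placeholders for the ground-truth set validated in Label Studio, and `ragas.evaluate` defaults to an OpenAI judge model unless another LLM is wired in.

```python
# Sketch of the RAGAS evaluation step. The records are placeholders; the real
# ground-truth answers would come from the Label Studio annotations.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, faithfulness

records = Dataset.from_dict({
    "question": ["What were the main findings on data use?"],
    "answer": ["<generated answer from Command-r or Mixtral>"],
    "contexts": [["<retrieved chunk 1>", "<retrieved chunk 2>"]],
    "ground_truth": ["<validated answer from Label Studio>"],
})

# Note: evaluate() uses an OpenAI judge by default; pass llm=/embeddings= to swap it.
scores = evaluate(records, metrics=[faithfulness, answer_relevancy, context_precision])
print(scores)
```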