Home
EVA is a response evaluation tool designed to assess the outputs of large language models (LLMs). By classifying these responses as either affirmative or similar, EVA plays a critical role in the ongoing testing and development of LLMs. This wiki provides detailed guidance on how to use EVA effectively as part of the Trust4AI research project.