
qna-gpt-similarity-eval

Overview

The "QnA GPT Similarity Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to achieve a high agreement with human evaluations compared to traditional mathematical measurements.

Inference samples

| Inference type | CLI | VS Code Extension |
| --- | --- | --- |
| Real time | deploy-promptflow-model-cli-example | deploy-promptflow-model-vscode-extension-example |
| Batch | N/A | N/A |
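
As a rough counterpart to the CLI example linked above, the model can also be deployed to a managed online endpoint with the `azure-ai-ml` Python SDK. This is a minimal sketch: the endpoint name `qna-similarity-eval`, the deployment name `blue`, and the workspace placeholders are assumptions, and a real deployment of this prompt flow model additionally needs its LLM connection configured, which is omitted here.

```python
# Minimal deployment sketch using the azure-ai-ml (v2) SDK; names and IDs are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Create the endpoint that will host the evaluation flow.
endpoint = ManagedOnlineEndpoint(name="qna-similarity-eval", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the registry model (version 3) onto the recommended SKU.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="qna-similarity-eval",
    model="azureml://registries/azureml/models/qna-gpt-similarity-eval/versions/3",
    instance_type="Standard_DS3_v2",
    instance_count=1,
    # Assumption: the flow's LLM connection / environment variables would also be set here.
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```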

Sample inputs and outputs (for real-time inference)

Sample input

{
    "inputs": {
        "question": "What feeds all the fixtures in low voltage tracks instead of each light having a line-to-low voltage transformer?",
        "ground_truth": "Master transformer.",
        "answer": "The main transformer is the object that feeds all the fixtures in low voltage tracks."
    }
}

Sample output

{
    "outputs": {
        "gpt_similarity": 4
    }
}
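
Once a real-time deployment exists, the sample input above can be scored through the endpoint. Below is a hedged sketch using `MLClient.online_endpoints.invoke` with the same placeholder names as the deployment sketch; the exact request and response shapes can vary with how the deployment is configured, but for this sample the result should resemble the sample output above.

```python
# Score the sample input against a deployed real-time endpoint (placeholder names).
import json

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

sample_input = {
    "inputs": {
        "question": (
            "What feeds all the fixtures in low voltage tracks instead of each light "
            "having a line-to-low voltage transformer?"
        ),
        "ground_truth": "Master transformer.",
        "answer": "The main transformer is the object that feeds all the fixtures in low voltage tracks.",
    }
}
with open("sample_input.json", "w") as f:
    json.dump(sample_input, f)

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

result = ml_client.online_endpoints.invoke(
    endpoint_name="qna-similarity-eval",
    deployment_name="blue",
    request_file="sample_input.json",
)
print(result)  # expected to contain a gpt_similarity score, e.g. {"outputs": {"gpt_similarity": 4}}
```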

Version: 3

View in Studio: https://ml.azure.com/registries/azureml/models/qna-gpt-similarity-eval/version/3

Properties

is-promptflow: True

azureml.promptflow.section: gallery

azureml.promptflow.type: evaluate

azureml.promptflow.name: QnA GPT Similarity Evaluation

azureml.promptflow.description: Compute the similarity of the answer based on the question and ground truth using an LLM.

inference-min-sku-spec: 2|0|14|28

inference-recommended-sku: Standard_DS3_v2
