# models qna f1 score eval
_github-actions[bot] edited this page Jan 23, 2024 · 7 revisions_
The "QnA F1 Score Evaluation" model evaluates Q&A Retrieval Augmented Generation (RAG) systems using the F1 score, computed from the word overlap between the predicted answer and the ground truth.
| Inference type | CLI | VS Code Extension |
|---|---|---|
| Real time | deploy-promptflow-model-cli-example | deploy-promptflow-model-vscode-extension-example |
| Batch | N/A | N/A |
Sample input:

```json
{
  "inputs": {
    "ground_truth": "Master transformer.",
    "answer": "The main transformer is the object that feeds all the fixtures in low voltage tracks."
  }
}
```
Sample output:

```json
{
  "outputs": {
    "f1_score": "0.14285714285714285"
  }
}
```
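The sample score above is consistent with a SQuAD-style token-level F1: lowercase both strings, strip punctuation and English articles (a/an/the), then compare word multisets. The sketch below is an illustration of that calculation, not the flow's actual source; the normalization steps are an assumption inferred from the sample numbers (1 shared token, 12 answer tokens, 2 ground-truth tokens → 2·1/14 = 1/7).

```python
import re
import string
from collections import Counter


def normalize(text: str) -> list[str]:
    # Assumed SQuAD-style normalization: lowercase, drop punctuation,
    # remove articles, and split on whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()


def f1_score(answer: str, ground_truth: str) -> float:
    pred_tokens = normalize(answer)
    truth_tokens = normalize(ground_truth)
    # Counter & Counter gives the multiset intersection (per-word min count).
    overlap = sum((Counter(pred_tokens) & Counter(truth_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)


score = f1_score(
    "The main transformer is the object that feeds all the fixtures in low voltage tracks.",
    "Master transformer.",
)
print(score)  # ≈ 0.14285714285714285 (1/7)
```

Only "transformer" is shared between the two normalized token lists, so precision is 1/12, recall is 1/2, and their harmonic mean reproduces the 0.142857… in the sample output.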
Version: 3
View in Studio: https://ml.azure.com/registries/azureml/models/qna-f1-score-eval/version/3
is-promptflow: True
azureml.promptflow.section: gallery
azureml.promptflow.type: evaluate
azureml.promptflow.name: QnA F1 Score Evaluation
azureml.promptflow.description: Compute the F1 Score based on words in answer and ground truth.
inference-min-sku-spec: 2|0|14|28
inference-recommended-sku: Standard_DS3_v2