# models qna f1 score eval
github-actions[bot] edited this page Oct 23, 2023
The "QnA F1 Score Evaluation" model evaluates Q&A Retrieval Augmented Generation (RAG) systems by computing an F1 score from the word overlap between the predicted answer and the ground truth.
| Inference type | Python sample (Notebook) | CLI with YAML |
|---|---|---|
| Real time | deploy-promptflow-model-python-example | deploy-promptflow-model-cli-example |
| Batch | N/A | N/A |
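Once the model is deployed to a real-time endpoint, it can be scored with the Azure ML CLI v2. A minimal sketch, assuming an already-deployed endpoint; the endpoint name `my-f1-eval-endpoint` and the file name `sample-request.json` are placeholders, not values from this page:

```shell
# Write the request payload to a file (name is arbitrary).
cat > sample-request.json <<'EOF'
{
  "inputs": {
    "ground_truth": "Master transformer.",
    "answer": "The main transformer is the object that feeds all the fixtures in low voltage tracks."
  }
}
EOF

# Score against a deployed real-time endpoint; "my-f1-eval-endpoint" is a placeholder.
az ml online-endpoint invoke \
  --name my-f1-eval-endpoint \
  --request-file sample-request.json
```

The response body is the JSON shown in the sample output below.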
**Sample input**

```json
{
  "inputs": {
    "ground_truth": "Master transformer.",
    "answer": "The main transformer is the object that feeds all the fixtures in low voltage tracks."
  }
}
```

**Sample output**

```json
{
  "outputs": {
    "f1_score": "0.14285714285714285"
  }
}
```
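The sample output above is consistent with SQuAD-style token-level F1: lowercase the text, strip punctuation, drop the articles "a"/"an"/"the", then compute precision and recall over the overlapping tokens. A minimal sketch that reproduces the sample score; the `normalize` helper and its exact rules are assumptions inferred from the sample values, not the flow's actual code:

```python
import string
from collections import Counter


def normalize(text: str) -> list[str]:
    # Lowercase, strip punctuation, and drop articles (SQuAD-style
    # normalization; assumed here because it reproduces the sample output).
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    return [tok for tok in text.split() if tok not in {"a", "an", "the"}]


def f1_score(answer: str, ground_truth: str) -> float:
    pred = normalize(answer)
    gt = normalize(ground_truth)
    # Multiset intersection counts each shared token at most min(count) times.
    num_same = sum((Counter(pred) & Counter(gt)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred)
    recall = num_same / len(gt)
    return 2 * precision * recall / (precision + recall)


score = f1_score(
    "The main transformer is the object that feeds all the fixtures "
    "in low voltage tracks.",
    "Master transformer.",
)
print(round(score, 6))  # prints 0.142857
```

Here the only shared token is "transformer": precision is 1/12 (one match out of twelve answer tokens after dropping articles), recall is 1/2, and the harmonic mean is 1/7 ≈ 0.142857, matching the sample output.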
Version: 2
View in Studio: https://ml.azure.com/registries/azureml/models/qna-f1-score-eval/version/2
is-promptflow: True
azureml.promptflow.section: gallery
azureml.promptflow.type: evaluate
azureml.promptflow.name: QnA F1 Score Evaluation
azureml.promptflow.description: Compute the F1 Score based on words in answer and ground truth.
inference-min-sku-spec: 2|0|14|28
inference-recommended-sku: Standard_DS3_v2