
models prompt flow documentation


prompt flow

Models in this category


  • analyze-conversations

    The "Analyze Conversations" is a standard model that utilizes Azure AI Language to perform various analyzes on text-based conversations. Azure AI language hosts pre-trained, task-oriented, and optimized conversation focused ML models, including various summarization aspects, PII entity extraction...

  • analyze-documents

    The "Analyze Documents" is a standard model that utilizes Azure AI Language to perform various analyzes on text-based documents. Azure AI language hosts pre-trained, task-oriented, and optimized document focused ML models, such as summarization, sentiment analysis, entity extraction, etc.

...

  • ask-wikipedia

    The "Ask Wikipedia" is a Q&A model that employs GPT3.5 to answer questions using information sourced from Wikipedia, ensuring more grounded responses. This process involves identifying the relevant Wikipedia link and extracting its contents. These contents are then used as an augmented prompt, en...

  • bring-your-own-data-chat-qna

    The "Bring Your Own Data Chat QnA" is a pre-trained chat model, enhanced by GPT3.5, that leverages your personally indexed data and chat history to deliver more concrete and relevant answers. It involves processing the raw query through an embedding procedure, followed by a "Vector Search" to pin...

  • bring-your-own-data-qna

    The "Bring your own data QnA" is a pre-trained Q&A model, enhanced by GPT3.5, that leverages your personally indexed data to deliver more concrete and relevant answers. It involves processing the raw query through an embedding procedure, followed by a "Vector Search" to pinpoint the most pertinen...

  • chat-quality-safety-eval

    The chat quality and safety evaluation flow will evaluate the chat systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your LLM responses. Utilizing a GPT model to assist with measurements aims to achieve a high agreement with human evaluatio...

  • chat-with-wikipedia

    The "Chat with Wikipedia" is a pre-trained chat model with GPT3.5: it combines conversation history and information from Wikipedia to make the answer more grounded. It involves finding a relevant Wikipedia link and getting page contents for the question. It can remember previous interactions and ...

  • classification-accuracy-eval

    The "Classification Accuracy Evaluation" is a model designed to assess the effectiveness of a data classification system. It involves matching each prediction against the ground truth, subsequently assigning a "Correct" or "Incorrect" score. The cumulative results are then leveraged to generate p...

  • count-cars

    The "Count Cars" is a model designed for accurately quantifying the number of specific vehicles – particularly red cars – in given images. Utilizing the advanced capabilities of Azure OpenAI GPT-4 Turbo with Vision, this system meticulously analyzes each image, identifies and counts red cars, out...

  • detect-defects

    The "Detect Defects" is a model designed for meticulous examination of images. It operates by employing GPT-4 Turbo with Vision to compare a test image against a reference image. Each analysis focuses on identifying variances or anomalies, classifying them as defects. This methodical comparison e...

  • how-to-use-functions-with-GPT-chat-API

    The "Use Functions with Chat Models" is a chat model illustrates how to employ the LLM tool's Chat API with external functions, thereby expanding the capabilities of GPT models. The Chat Completion API includes an optional 'functions' parameter, which can be used to stipulate function specificati...

  • multi-index-rerank-qna

    This "Multi-Source Rerank Q&A" demonstrates Q&A application, enabled by reranking data from multiple sources and powered by GPT. It utilizes indexed files and the rerank tool from Azure Machine Learning to provide grounded answers. You can ask a wide range of questions and receive responses based...

  • playground-ayod-rag

    This flow template is an advanced RAG flow modeled on the implementation of Azure AI Playground - on Your Data. The flow consists of tools that rewrite the user query input into one or more queries based on chat history context using an LLM, retrieve data for the rewritten queries from the data index and ...

  • qna-ada-similarity-eval

    The "QnA Ada Similarity Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to...

  • qna-coherence-eval

    The "QnA Coherence Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to achi...

  • qna-f1-score-eval

    The "QnA F1 Score Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems using f1 score based on the word counts in predicted answer and ground truth.

Inference samples

Real time inference via the CLI and the VS Code Extension.
  • qna-fluency-eval

    The "QnA Fluency Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to achiev...

  • qna-gpt-similarity-eval

    The "QnA GPT Similarity Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to...

  • qna-groundedness-eval

    The "QnA Groundedness Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to a...

  • qna-non-rag-metrics-eval

    The Q&A evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with human evaluations compa...

  • qna-quality-safety-eval

    The Q&A quality and safety evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with huma...

  • qna-rag-metrics-eval

    The Q&A RAG (Retrieval Augmented Generation) evaluation flow will evaluate the Q&A RAG systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses . Utilizing GPT model to assist with measurements aims to achieve a high agreement with...

  • qna-relevance-eval

    The "QnA Relevance Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to achi...

  • qna-with-your-own-data-using-faiss-index

    The "QnA with Your Own Data Using Faiss Index" is a Q&A model with GPT3.5 using information from vector search to make the answer more grounded. It involves embedding user's question with LLM, and then using Faiss Index Lookup to find relevant documents based on vectors. By utilizing vector searc...

  • rai-eval-ui-dag-flow

    The Q&A quality and safety evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with huma...

  • rai-qna-quality-safety-eval

    The Q&A quality and safety evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with huma...

  • rerank-qna

    This "Index Data Rerank Q&A" demonstrates Q&A application, enabled by reranking data from vector index stores and powered by GPT. It utilizes index stores and the rerank tool from Azure Machine Learning to provide grounded answers. You can ask a wide range of questions and receive responses based...

  • template-chat-flow

    The "Template Chat Flow" is a chat model using GPT3.5 that generates the next message based on the conversation history and the latest chat content.

Inference samples

Real time inference via the CLI and the VS Code Extension.
  • template-eval-flow

    The "Template Evaluation Flow" is a evaluate model to measure how well the output matches the expected criteria and goals.

Inference samples

Real time inference via the CLI and the VS Code Extension (https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/index.html).
  • template-standard-flow

    The "Template Standard Flow" is a model using GPT3.5 to generate a joke based on user input.

Inference samples

Real time inference via the CLI and the VS Code Extension.
  • web-classification

    The "Web Classification" is a model demonstrating multi-class classification with LLM. Given an url, it will classify the url into one web category with just a few shots, simple summarization and classification prompts.

Inference samples

Real time inference via the CLI and the VS Code Extension.
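A minimal sketch of the few-shot classification idea behind "Web Classification", assuming the OpenAI Python SDK; the categories, example pages, and prompt wording below are illustrative, not the flow's actual prompts:

```python
# Sketch of few-shot URL classification: give the model a couple of labelled examples,
# then ask it to pick exactly one category for the new URL and its page summary.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOTS = [
    ("https://arxiv.org/abs/2303.08774", "GPT-4 technical report ...", "Academic"),
    ("https://play.google.com/store/apps", "Android apps on Google Play ...", "App"),
]

def classify(url: str, page_summary: str) -> str:
    examples = "\n\n".join(f"URL: {u}\nSummary: {s}\nCategory: {c}" for u, s, c in FEW_SHOTS)
    prompt = (
        "Classify the URL into exactly one category: App, Academic, Movie, News, Other.\n\n"
        f"{examples}\n\nURL: {url}\nSummary: {page_summary}\nCategory:"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

print(classify("https://www.imdb.com/title/tt1375666/", "Inception (2010), cast, reviews and ratings"))
```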