models documentation
ALLaM is a series of powerful language models designed to advance Arabic Language Technology (ALT) developed by the National Center for Artificial Intelligence (NCAI) at the Saudi Data and AI Authority (SDAIA). ALLaM-2-7b-instruct is traine...
-
The "Analyze Conversations" is a standard model that utilizes Azure AI Language to perform various analyzes on text-based conversations. Azure AI language hosts pre-trained, task-oriented, and optimized conversation focused ML models, including various summarization aspects, PII entity extraction...
-
The "Analyze Documents" is a standard model that utilizes Azure AI Language to perform various analyzes on text-based documents. Azure AI language hosts pre-trained, task-oriented, and optimized document focused ML models, such as summarization, sentiment analysis, entity extraction, etc.
-
The "Ask Wikipedia" is a Q&A model that employs GPT3.5 to answer questions using information sourced from Wikipedia, ensuring more grounded responses. This process involves identifying the relevant Wikipedia link and extracting its contents. These contents are then used as an augmented prompt, en...
-
Automated Machine Learning, or AutoML, is a process that automates the repetitive and time-consuming tasks involved in developing machine learning models. This helps data scientists, analysts, and developers to create models more efficiently and with higher quality, resulting in increased product...
-
AutoML-Image-Instance-Segmentation
Automated Machine Learning, or AutoML, is a process that automates the repetitive and time-consuming tasks involved in developing machine learning models. This helps data scientists, analysts, and developers to create models more efficiently and with higher quality, resulting in increased product...
-
Automated Machine Learning, or AutoML, is a process that automates the repetitive and time-consuming tasks involved in developing machine learning models. This helps data scientists, analysts, and developers to create models more efficiently and with higher quality, resulting in increased product...
-
AutoML-Named-Entity-Recognition
Automated Machine Learning, or AutoML, is a process that automates the repetitive and time-consuming tasks involved in developing machine learning models. This helps data scientists, analysts, and developers to create models more efficiently and with higher quality, resulting in increased product...
-
Automated Machine Learning, or AutoML, is a process that automates the repetitive and time-consuming tasks involved in developing machine learning models. This helps data scientists, analysts, and developers to create models more efficiently and with higher quality, resulting in increased product...
-
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inpu...
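A minimal sketch of the masked-token objective described above, run through the Hugging Face fill-mask pipeline; the bert-base-uncased model id is an assumption, since this entry does not name the exact variant.

```python
# Minimal sketch (assumed checkpoint): masked-token prediction with a BERT model.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")  # assumed variant
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```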
-
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate input...
-
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inpu...
-
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inpu...
-
BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
BiomedCLIP is a biomedical vision-language foundation model that is pretrained on PMC-15M, a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central, using contrastive learning. It uses PubMedBERT as the text encoder and Vision Transformer as the i...
-
| | |
| -- | -- |
| Score range | Float [0-1]: higher means better quality. |
| What is this metric? | BLEU (Bilingual Evaluation Understudy) score is commonly used in natural language processing (NLP) and machine translation. It measures how closely the generated text matches the reference text.... |
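A minimal sketch of a sentence-level BLEU computation with NLTK; it only illustrates the n-gram-overlap idea and is not the evaluator's own implementation.

```python
# Illustrative only: sentence-level BLEU via NLTK, not the evaluation flow's code.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()

score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")  # float in [0, 1]; higher means closer to the reference
```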
-
The "Bring Your Own Data Chat QnA" is a pre-trained chat model, enhanced by GPT3.5, that leverages your personally indexed data and chat history to deliver more concrete and relevant answers. It involves processing the raw query through an embedding procedure, followed by a "Vector Search" to pin...
-
The "Bring your own data QnA" is a pre-trained Q&A model, enhanced by GPT3.5, that leverages your personally indexed data to deliver more concrete and relevant answers. It involves processing the raw query through an embedding procedure, followed by a "Vector Search" to pinpoint the most pertinen...
-
bytetrack_yolox_x_crowdhuman_mot17-private-half
bytetrack_yolox_x_crowdhuman_mot17-private-half
model is from OpenMMLab's MMTracking library. Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos. Most methods obtai... -
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.
OSCAR or Open...
-
The chat quality and safety evaluation flow will evaluate chat systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your LLM responses. Utilizing a GPT model to assist with measurements aims to achieve a high agreement with human evaluatio...
-
The "Chat with Wikipedia" is a pre-trained chat model with GPT3.5: it combines conversation history and information from Wikipedia to make the answer more grounded. It involves finding a relevant Wikipedia link and getting page contents for the question. It can remember previous interactions and ...
-
The "Classification Accuracy Evaluation" is a model designed to assess the effectiveness of a data classification system. It involves matching each prediction against the ground truth, subsequently assigning a "Correct" or "Incorrect" score. The cumulative results are then leveraged to generate p...
-
| | |
| -- | -- |
| Score range | Integer [1-5]: 1 is the lowest quality and 5 is the highest quality. |
| What is this metric? | Measures how well the language model can produce output that flows smoothly, reads naturally, and resembles human-like language. |
| How does it work? | The coherence... |
-
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. The Stable-Diffusion-v1-4 checkpoint was initialized with the weights of the Stable-Diffusion-v1-2 checkpoint and subsequently fine-tuned on 225k steps at resolution 5...
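A minimal sketch of text-to-image generation with this checkpoint through the diffusers library; the CompVis/stable-diffusion-v1-4 repo id and the availability of a CUDA GPU are assumptions.

```python
# Minimal sketch (assumed repo id and GPU): text-to-image with Stable Diffusion v1-4.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```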
-
The "Count Cars" is a model designed for accurately quantifying the number of specific vehicles – particularly red cars – in given images. Utilizing the advanced capabilities of Azure OpenAI GPT-4 Turbo with Vision, this system meticulously analyzes each image, identifies and counts red cars, out...
-
The CXRReportGen model utilizes a multimodal architecture, integrating a BiomedCLIP image encoder with a Phi-3-Mini text encoder to help an application interpret complex medical imaging studies of chest X-rays. CXRReportGen follows the same framework as **[MAIRA-2](https://www.micros...
-
Databricks' dolly-v2-12b is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. Based on pythia-12b, Dolly is trained on ~15k instruction/response fine-tuning records [databricks-dolly-15k](https://github.com/d... -
The Model Card for DeciCoder 1B provides details about a 1 billion parameter decoder-only code completion model developed by Deci. The model was trained on Python, Java, and JavaScript subsets of Starcoder Training Dataset and uses Grouped Query Attention with a context window of 2048 tokens. It ...
-
DeciDiffusion 1.0 is an 820 million parameter latent diffusion model designed for text-to-image conversion. Trained initially on the LAION-v2 dataset and fine-tuned on the LAION-ART dataset, the model's training involved advanced techniques to improve speed, training performance, and achieve su... -
DeciLM-7B is a decoder-only text generation model with 7.04 billion parameters, released by Deci under the Apache 2.0 license. It is the top-performing 7B base language model on the Open LLM Leaderboard and uses variable Grouped-Query Attention (GQA) to achieve a superior balance between accuracy...
-
DeciLM-7B-instruct is a model for short-form instruction following, built by LoRA fine-tuning on the SlimOrca dataset. It is a derivative of the recently released DeciLM-7B language model, a pre-trained, high-efficiency generative text model with 7 billion parameters. DeciLM-7B-instruct is one of...
-
seed = 42
batch_size = 12
n_epochs = 4
base_LM_model = "microsoft/MiniLM-L12-H384-uncased"
max_seq_len = 384
learning_rate = 4e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
grad_acc_steps = 4
-
This is the roberta-base model, fine-tuned using the SQuAD2.0 dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
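A minimal sketch of extractive question answering with this checkpoint; the deepset/roberta-base-squad2 repo id is an assumption, since the entry does not state it.

```python
# Minimal sketch (assumed repo id): extractive QA with the SQuAD2.0-tuned roberta-base model.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This roberta-base model was fine-tuned on the SQuAD2.0 question-answering dataset.",
)
print(result["answer"], round(result["score"], 3))
```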
-
deformable_detr_twostage_refine_r50_16x2_50e_coco
deformable_detr_twostage_refine_r50_16x2_50e_coco
model is from OpenMMLab's MMDetection library. This model is reported to obtain <a href="https://github.com/open-mmlab/mmdetection/blob/e9cae2d0787cd5c2fc6165a6... -
The "Detect Defects" is a model designed for meticulous examination of images. It operates by employing GPT-4 Turbo with Vision to compare a test image against a reference image. Each analysis focuses on identifying variances or anomalies, classifying them as defects. This methodical comparison e...
-
DistilBERT, a transformers model, is designed to be smaller and quicker than BERT. It underwent pretraining on the same dataset in a self-supervised manner, utilizing the BERT base model as a reference. This entails training solely on raw texts, without human annotation, thus enabling the utiliza...
-
distilbert-base-cased-distilled-squad
The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT, and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://...
-
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lot...
-
distilbert-base-uncased-distilled-squad
The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT, and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxi...
-
distilbert-base-uncased-finetuned-sst-2-english
DistilBERT base uncased finetuned SST-2 model is a fine-tuned checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2. This model reaches an accuracy of 91.3 on the dev set (for comparison, the BERT bert-base-uncased version reaches an accuracy ...
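A minimal usage sketch for this checkpoint via the Hugging Face pipeline; the model id is taken from the entry title above.

```python
# Minimal sketch: binary sentiment classification with the SST-2 fine-tuned DistilBERT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english"
)
print(classifier("I really enjoyed this movie!"))  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```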
-
DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using knowledge distillation and was designed to be a faster, li...
-
distilroberta-base is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT. The code for the distillation process can be found [here](https://github.com/hugg...
-
Election Critical Information (ECI) refers to any content related to elections, including voting processes, candidate information, and election results. The ECI evaluator uses the Azure AI Safety Evaluation service to assess the generated responses for ECI without a disclaimer.
#...
-
| | |
| -- | -- |
| Score range | Float [0-1]: higher means better quality. |
| What is this metric? | F1 score measures the similarity by shared tokens between the generated text and the ground truth, focusing on both precision and recall. |
| How does it work? | The F1-score computes the ratio... |
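A minimal sketch of the token-overlap F1 described in the table; it mirrors the precision/recall definition but is not the evaluator's exact implementation.

```python
# Illustrative only: token-overlap F1 between a generated answer and the ground truth.
from collections import Counter

def token_f1(prediction: str, ground_truth: str) -> float:
    pred_tokens = prediction.lower().split()
    true_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(true_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(true_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Paris is the capital of France", "The capital of France is Paris"))  # 1.0
```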
-
BART is a transformer model that combines a bidirectional encoder similar to BERT with an autoregressive decoder akin to GPT. It is trained using two main techniques: (1) corrupting text with a chosen noising function, and (2) training a model to reconstruct the original text.
When fine-tuned fo...
-
facebook-deit-base-patch16-224
DeiT (Data-efficient image Transformers) is an image transformer that does not require very large amounts of data for training. This is achieved through a novel distillation procedure using a teacher-student strategy, which results in high throughput and accuracy. DeiT is pre-trained and fine-tuned o...
-
facebook-dinov2-base-imagenet1k-1-layer
A Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper DINOv2: Learning Robust Visual Features without Supervision by Oquab et al. and first released...
-
Facebook-DinoV2-Image-Embeddings-ViT-Base
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion with the DinoV2 method.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. One also adds a [CLS] token ...
-
Facebook-DinoV2-Image-Embeddings-ViT-Giant
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion with the DinoV2 method.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. One also adds a [CLS] token ...
-
The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 bi...
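A minimal sketch of point-prompted mask generation with SAM through the transformers library; the facebook/sam-vit-base checkpoint, the local image path, and the example point are assumptions.

```python
# Minimal sketch (assumed checkpoint, image path, and prompt point): point-prompted SAM masks.
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base")

image = Image.open("example.jpg").convert("RGB")
input_points = [[[450, 600]]]  # one (x, y) prompt for the single input image

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
print(masks[0].shape)  # candidate masks for the prompted object
```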
-
The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 bi...
-
The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 bi...
-
finiteautomata-bertweet-base-sentiment-analysis
Repository: https://github.com/finiteautomata/pysentimiento/
Model trained with the SemEval 2017 corpus (around 40k tweets). The base model is BERTweet, a RoBERTa model trained on English tweets.
Uses `POS...
-
| | |
| -- | -- |
| Score range | Integer [1-5]: 1 is the lowest quality and 5 is the highest quality. |
| What is this metric? | Fluency measures the effectiveness and clarity of written communication, focusing on grammatical accuracy, vocabulary range, sentence complexity, coherence, and overa... |
-
| | |
| -- | -- |
| Score range | Float [0-1]: higher means better quality. |
| What is this metric? | The GLEU (Google-BLEU) score measures the similarity by shared n-grams between the generated text and ground truth, similar to the BLEU score, focusing on both precision and recall. But it addre... |
-
The Vision Transformer (ViT) model, as introduced in the paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" by Dosovitskiy et al., underwent pre-training on ImageNet-21k with a resolution of 224x224. Su...
-
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generat...
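A minimal sketch of open-ended text generation with GPT-2 via the Hugging Face pipeline; the gpt2 model id is assumed, as this entry does not name the exact size.

```python
# Minimal sketch (assumed model id): open-ended text generation with GPT-2.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)
print(generator("Hello, I'm a language model,", max_length=30, num_return_sequences=2))
```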
-
GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
See the [associated paper](https://d4mucfpksywv.cloudfront.net/bet...
-
GPT-2 Medium is the 355M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.
See the [associated paper](https://d4mucfpksywv.c...
-
| | |
| -- | -- |
| Score range | Integer [1-5]: 1 is the lowest quality and 5 is the highest quality. |
| What is this metric? | Groundedness measures how well the generated response aligns with the given context in a retrieval-augmented generation scenario, focusing on its relevance and accura... |
-
| | |
| -- | -- |
| Score range | Boolean [true, false]: false if the response is ungrounded and true if it is grounded. |
| What is this metric? | Groundedness Pro (powered by Azure AI Content Safety) detects whether the generated text response is consistent or accurate with respect to the given ... |
-
Hateful and unfair content refers to any language pertaining to hate toward or unfair representations of individuals and social groups along factors including but not limited to race, ethnicity, nationality, gender, sexual orientation, religion, immigration status, ability, persona...
-
how-to-use-functions-with-GPT-chat-API
The "Use Functions with Chat Models" is a chat model illustrates how to employ the LLM tool's Chat API with external functions, thereby expanding the capabilities of GPT models. The Chat Completion API includes an optional 'functions' parameter, which can be used to stipulate function specificati...
-
Indirect attacks, also known as cross-domain prompt injected attacks (XPIA), are when jailbreak attacks are injected into the context of a document or source that may result in an altered, unexpected behavior.
Indirect attacks evaluations are broken down into three subcategories: ...
-
Summary: camembert-ner is a NER model fine-tuned from camemBERT on the Wikiner-fr dataset and validated on email/chat data. It shows better performance on entities that do not start with an uppercase letter. The model has four entity classes (MISC, PER, ORG, and LOC) plus the O tag. The model can be loaded using Hugging...
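A minimal sketch of loading the model through the Hugging Face pipeline, as the summary suggests; the Jean-Baptiste/camembert-ner repo id is an assumption.

```python
# Minimal sketch (assumed repo id): French NER with camembert-ner.
from transformers import pipeline

ner = pipeline("ner", model="Jean-Baptiste/camembert-ner", aggregation_strategy="simple")
print(ner("je m'appelle jean-baptiste et je vis à montréal"))  # lowercase entities are handled well
```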
-
Note: Use of this model is governed by the Meta license. Click on View License above.
Meta has developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion ...
-
mask_rcnn_swin-t-p4-w7_fpn_1x_coco
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual ...
-
Most medical imaging AI today is narrowly built to detect a small set of individual findings on a single modality like chest X-rays. This training approach is data- and computationally inefficient, requiring ~6-12 months per finding[^1], and often fails to generalize in real-world environments. By f...
-
Biomedical image analysis is fundamental for biomedical discovery in cell biology, pathology, radiology, and many other biomedical domains. MedImageParse is a biomedical foundation model for imaging parsing that can jointly conduct segmentation, detection, and recognition across 9 imaging modalit...
-
| | | | -- | -- | | Score range | Float [0-1]: higher means better quality. | | What is this metric? | METEOR score measures the similarity by shared n-grams between the generated text and the ground truth, similar to the BLEU score, focusing on precision and recall. But it addresses limitations ...
-
microsoft-beit-base-patch16-224-pt22k-ft22k
BEiT (Bidirectional Encoder representation from Image Transformers) is a vision transformer (ViT) pre-trained with Masked Image Modeling (MIM), a self-supervised pre-training approach inspired by BERT from NLP, followed by intermediate fine-tuning on the ImageNet-22k dataset. It is then fine-tuned f...
-
DeBERTa (Decoding-enhanced BERT with Disentangled Attention) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB training data...
-
DeBERTa (Decoding-enhanced BERT with Disentangled Attention) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on a majority of NLU tasks with 80GB training data.
Please check the [offi...
-
DeBERTa (Decoding-enhanced BERT with Disentangled Attention) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB training data...
-
DeBERTa (Decoding-enhanced BERT with Disentangled Attention) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on a majority of NLU tasks with 80GB training data.
Please check the [offi...
-
DeBERTa (Decoding-enhanced BERT with Disentangled Attention) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB training data...
-
microsoft-llava-med-v1.5-mistral-7b
LLaVA-Med v1.5, using mistralai/Mistral-7B-Instruct-v0.2 as LLM for a better commercial license
Large Language and Vision Assistant for bioMedicine (i.e., “LLaVA-Med”) is a large language and vision model trained using a curriculum l...
-
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 ...
-
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities. All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 ...
-
Phi-1.5 is a Transformer-based language model with 1.3 billion parameters. It was trained on a combination of data sources, including an additional source of NLP synthetic texts. Phi-1.5 performs exceptionally well on benchmarks testing common sense, language understandi...
Phi-2 is a language model with 2.7 billion parameters. The phi-2 model was trained using the same data sources as phi-1, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assesse...
-
microsoft-swinv2-base-patch4-window12-192-22k
The Swin Transformer V2 model is a type of Vision Transformer. Pre-trained on ImageNet-21k at a resolution of 192x192, it was introduced in the paper "Swin Transformer V2: Scaling Up Capacity and Resolution" authored by ...
-
mistral-community-Mixtral-8x22B-v0-1
The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/H...
-
mistralai-Mistral-7B-Instruct-v0-2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
For full details...
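A minimal sketch of prompting the model with its chat template via transformers; the mistralai/Mistral-7B-Instruct-v0.2 repo id, the accelerate package, and sufficient GPU memory are assumptions.

```python
# Minimal sketch (assumed repo id and hardware): chat-template prompting of Mistral-7B-Instruct-v0.2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

messages = [{"role": "user", "content": "Summarize what an instruct fine-tuned model is."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```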
-
mistralai-Mistral-7B-Instruct-v0-3
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2:
- Extended vocabulary to 32768 ...
-
mistralai-Mistral-7B-Instruct-v01
The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.
For full details of this mod...
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks tested.
For full details of this model please read paper and [releas...
-
mistralai-Mixtral-8x22B-Instruct-v0-1
The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.
Inference type | Python sample (Notebook) | CLI with YAML |
---|---|---|
Real time | <a href="https://aka.ms/... |
-
The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
Mixtral-8x22B-v0.1 is a pretrained base model and therefore does not have any moderation mechanisms.
[Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/H...
-
mistralai-Mixtral-8x7B-Instruct-v01
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks with 6x faster inference.
Mixtral-8x7B-v0.1 is a decoder-only model with 8 distinct groups, or "experts". At every layer, for every tok...
The Mixtral-8x7B-v0.1 Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B-v0.1 outperforms Llama 2 70B on most benchmarks with 6x faster inference.
For full details of this model please read [release blog post](https://mi...
-
mmd-3x-deformable-detr_refine_twostage_r50_16xb2-50e_coco
deformable-detr_refine_twostage_r50_16xb2-50e_coco
model is from OpenMMLab's MMDetection library. DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while... -
mmd-3x-mask-rcnn_swin-t-p4-w7_fpn_1x_coco
mask-rcnn_swin-t-p4-w7_fpn_1x_coco
model is from OpenMMLab's MMDetection library. This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for comp... -
mmd-3x-rtmdet-ins_x_8xb16-300e_coco
rtmdet-ins_x_8xb16-300e_coco
model is from OpenMMLab's MMDetection library. In this paper, we aim to design an efficient real-time object detector that exceeds the YOLO series and is easily extensible for many o... -
mmd-3x-sparse-rcnn_r101_fpn_300-proposals_crop-ms-480-800-3x_coco
sparse-rcnn_r101_fpn_300-proposals_crop-ms-480-800-3x_coco
model is from OpenMMLab's MMDetection library. We present Sparse R-CNN, a purely sparse method for object detection in images. Existing works on object ... -
mmd-3x-sparse-rcnn_r50_fpn_300-proposals_crop-ms-480-800-3x_coco
sparse-rcnn_r50_fpn_300-proposals_crop-ms-480-800-3x_coco
model is from OpenMMLab's MMDetection library. We present Sparse R-CNN, a purely sparse method for object detection in images. Existing works on object d... -
mmd-3x-vfnet_r50-mdconv-c3-c5_fpn_ms-2x_coco
vfnet_r50-mdconv-c3-c5_fpn_ms-2x_coco
model is from OpenMMLab's MMDetection library. Accurately ranking the vast number of candidate detections is crucial for dense object detectors to achieve high performance. ... -
mmd-3x-vfnet_x101-64x4d-mdconv-c3-c5_fpn_ms-2x_coco
vfnet_x101-64x4d-mdconv-c3-c5_fpn_ms-2x_coco
model is from OpenMMLab's MMDetection library. Accurately ranking the vast number of candidate detections is crucial for dense object detectors to achieve high perfor... -
mmd-3x-yolof_r50_c5_8x8_1x_coco
yolof_r50_c5_8x8_1x_coco
model is from OpenMMLab's MMDetection library. This paper revisits feature pyramids networks (FPN) for one-stage detectors and points out that the success of FPN is due to its divide-an... -
Multimodal Early Fusion Transformer (MMEFT) is a transformer-based model tailored for processing both structured and unstructured data.
It can be used for multi-class and multi-label multimodal classification tasks, and is capable of handling datasets with features from diverse modes, includ...
-
This "Multi-Source Rerank Q&A" demonstrates Q&A application, enabled by reranking data from multiple sources and powered by GPT. It utilizes indexed files and the rerank tool from Azure Machine Learning to provide grounded answers. You can ask a wide range of questions and receive responses based...
-
ocsort_yolox_x_crowdhuman_mot17-private-half
ocsort_yolox_x_crowdhuman_mot17-private-half
model is from OpenMMLab's MMTracking library. Multi-Object Tracking (MOT) has rapidly progressed with the development of object detection and re-identification. Howev... -
OpenAI-CLIP-Image-Text-Embeddings-vit-base-patch32
OpenAI's CLIP (Contrastive Language–Image Pre-training) model was designed to investigate the factors that contribute to the robustness of computer vision tasks. It can seamlessly adapt to a range of image classification tasks without requiring specific training for each, demonstrating efficiency...
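A minimal sketch of zero-shot image-text matching with this checkpoint; the openai/clip-vit-base-patch32 repo id and the local image path are assumptions.

```python
# Minimal sketch (assumed repo id and image path): zero-shot image-text similarity with CLIP.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg").convert("RGB")
inputs = processor(text=["a photo of a cat", "a photo of a dog"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=1))  # probability of each caption for the image
```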
-
OpenAI-CLIP-Image-Text-Embeddings-ViT-Large-Patch14-336
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general ... -
OpenAI's CLIP (Contrastive Language–Image Pre-training) model was designed to investigate the factors that contribute to the robustness of computer vision tasks. It can seamlessly adapt to a range of image classification tasks without requiring specific training for each, demonstrating efficiency...
-
OpenAI's CLIP (Contrastive Language–Image Pre-training) model was designed to investigate the factors that contribute to the robustness of computer vision tasks. It can seamlessly adapt to a range of image classification tasks without requiring specific training for each, demonstrating efficiency...
-
Whisper is an OpenAI pre-trained speech recognition model with potential applications for ASR solutions for developers. However, due to weak supervision and large-scale noisy data, it should be used with caution in high-risk domains. The model has been trained on 680k hours of audio data represen...
-
Whisper is a model that can recognize and translate speech using deep learning. It was trained on a large amount of data from different sources and languages. Whisper models can handle various tasks and domains without needing to adjust the model.
Whisper large-v3 is similar to the previous larg...
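A minimal sketch of transcription with this model via the Hugging Face pipeline; the openai/whisper-large-v3 repo id and the local audio file are assumptions.

```python
# Minimal sketch (assumed repo id and audio file): speech recognition with Whisper large-v3.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
print(asr("meeting_recording.wav")["text"])
```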
-
The Phi-3-Medium-128K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Ph...
-
The Phi-3-Medium-4K-Instruct is a 14B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-...
-
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
After initi...
-
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3...
-
The Phi-3-Small-128K-Instruct is a 7B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model supports 128K conte...
-
The Phi-3-Small-8K-Instruct is a 7B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model supports 8K context l...
-
Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 mo...
-
Phi-3.5-mini is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data. The model belongs to the Phi-3 model family and supports 128K token context length. Th...
-
Phi-3.5-MoE is a lightweight, state-of-the-art open model built upon datasets used for Phi-3 - synthetic data and filtered publicly available documents - with a focus on very high-quality, reasoning dense data. The model supports multilingual and comes with 128K context length (in tokens). The mo...
-
Phi-3.5-vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and t...
-
This flow template is an advanced RAG flow modeled on the implementation of Azure AI Playground - on Your Data. The flow consists of tools that rewrites user query input into one or more queries based on chat history context using LLM, retrieves data for rewritten queries from the data index and ...
-
PRISM is a multi-modal generative foundation model for slide-level analysis of H&E-stained histopathology images. Utilizing Virchow tile embeddings and clinical report texts for pre-training, PRISM combines these embeddings into a single slide embedding and generates a text-based diagnostic repor...
-
[Training...
-
[Training...
-
projecte-aina-FLOR-1-3B-Instructed
-
[Training...
-
projecte-aina-FLOR-6-3B-Instructed
-
Protected material is any text that is under copyright, including song lyrics, recipes, and articles. Protected material evaluation leverages the Azure AI Content Safety Protected Material for Text service to perform the classification.
Protected Material evaluations ...
Digital pathology poses unique computational challenges, as a standard gigapixel slide may comprise tens of thousands of image tiles[^1],[^2],[^3]. Previous models often rely predominantly on tile-level predictions, which can overlook critical slide-level context and spatial depen...
-
The "QnA Ada Similarity Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to...
-
The "QnA Coherence Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to achi...
-
The "QnA F1 Score Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems using f1 score based on the word counts in predicted answer and ground truth.
Inference type | CLI | VS Code Extension |
---|---|---|
Real time | <a href="https://microsoft.github.io... |
-
The "QnA Fluency Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to achiev...
-
The "QnA GPT Similarity Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to...
-
The "QnA Groundedness Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to a...
-
The Q&A evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with human evaluations compa...
-
The Q&A quality and safety evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with huma...
-
The Q&A RAG (Retrieval Augmented Generation) evaluation flow will evaluate the Q&A RAG systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses . Utilizing GPT model to assist with measurements aims to achieve a high agreement with...
-
The "QnA Relevance Evaluation" is a model to evaluate the Q&A Retrieval Augmented Generation systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT-3.5 as the Language Model to assist with measurements aims to achi...
-
qna-with-your-own-data-using-faiss-index
The "QnA with Your Own Data Using Faiss Index" is a Q&A model with GPT3.5 using information from vector search to make the answer more grounded. It involves embedding user's question with LLM, and then using Faiss Index Lookup to find relevant documents based on vectors. By utilizing vector searc...
-
The Q&A quality and safety evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with huma...
-
The Q&A quality and safety evaluation flow will evaluate the Q&A systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of your responses. Utilizing GPT and GPT embedding model to assist with measurements aims to achieve a high agreement with huma...
-
| | |
| -- | -- |
| Score range | Integer [1-5]: 1 is the lowest quality and 5 is the highest quality. |
| What is this metric? | Coherence measures the logical and orderly presentation of ideas in a response, allowing the reader to easily follow and understand the writer's train of thought. A c... |
-
This "Index Data Rerank Q&A" demonstrates Q&A application, enabled by reranking data from vector index stores and powered by GPT. It utilizes index stores and the rerank tool from Azure Machine Learning to provide grounded answers. You can ask a wide range of questions and receive responses based...
-
| | |
| -- | -- |
| Score range | Integer [1-5]: 1 is the lowest quality and 5 is the highest quality. |
| What is this metric? | Retrieval measures the quality of search without ground truth. It focuses on how relevant the context chunks (encoded as a string) are to address a query and how the ... |
-
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate i...
-
The RoBERTa base OpenAI Detector functions as a model designed to detect outputs generated by the GPT-2 model. It was created by refining a RoBERTa base model using the outputs of the 1.5B-parameter GPT-2 model. This detector is utilized to determine whether text was generated by a GPT-2 model. O...
-
RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate i...
-
roberta-large-mnli is the RoBERTa large model fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. The model is a pretrained model on English language text using a masked language modeling ...
-
RoBERTa large OpenAI Detector is the GPT-2 output detector model, obtained by fine-tuning a RoBERTa large model with the outputs of the 1.5B-parameter GPT-2 model. The model can be used to predict if text was generated by a GPT-2 model. This model was released by OpenAI at the same time as Op...
-
| | |
| -- | -- |
| Score range | Float [0-1]: higher means better quality. |
| What is this metric? | ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a set of metrics used to evaluate automatic summarization and machine translation. It measures the overlap between generated text and... |
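A minimal sketch of ROUGE scoring with the rouge-score package; it only illustrates the overlap idea and is not the evaluator's own implementation.

```python
# Illustrative only: ROUGE-1 and ROUGE-L overlap between a reference and a generated summary.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "The cat sat on the mat.",        # reference / ground truth
    "A cat was sitting on the mat.",  # generated text
)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)
```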
-
runwayml-stable-diffusion-inpainting
Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask.
The Stable-Diffusion-Inpainting was initialized with the weights of the Stable-Diffus...
-
runwayml-stable-diffusion-v1-5
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. The Stable-Diffusion-v1-5 checkpoint was initialized with the weights of the Stable-Diffusion-v1-2 checkpoint and subsequently fine-tuned on 595k steps at resolution 5...
-
Salesforce-BLIP-2-opt-2-7b-image-to-text
The BLIP-2 model, utilizing OPT-2.7b (a large language model with 2.7 billion parameters), is presented in the paper titled "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models". T...
-
Salesforce-BLIP-2-opt-2-7b-vqa
The BLIP-2 model, utilizing OPT-2.7b (a large language model with 2.7 billion parameters), is presented in the paper titled "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models". Th... -
Salesforce-BLIP-image-captioning-base
BLIP (Bootstrapping Language-Image Pre-training), designed for unified vision-language understanding and generation, is a new VLP framework that expands the scope of downstream tasks compared to existing methods. The framework encompasses two key contributions from both model and data perspective... -
BLIP (Bootstrapping Language-Image Pre-training), designed for unified vision-language understanding and generation, is a new VLP framework that expands the scope of downstream tasks compared to existing methods. The framework encompasses two key contributions from both model and data perspective... -
Self-Harm-Related-Content-Evaluator
Self-harm-related content includes language pertaining to actions intended to hurt, injure, or damage one's body or kill oneself.
Safety evaluations annotate self-harm-related content using a 0-7 scale.
Very Low (0-1) refers to
- Content that contains self-...
Sexual content includes language pertaining to anatomical organs and genitals, romantic relationships, acts portrayed in erotic terms, pregnancy, physical sexual acts (including assault or sexual violence), prostitution, pornography, and sexual abuse.
Safety eva...
-
| | |
| -- | -- |
| Score range | Integer [1-5]: 1 is the lowest quality and 5 is the highest quality. |
| What is this metric? | Similarity measures the degrees of similarity between the generated text and its ground truth with respect to a query. |
| How does it work? | The similarity metric i... |
-
Arctic is a dense-MoE Hybrid transformer architecture pre-trained from scratch by the Snowflake AI Research Team. We are releasing model checkpoints for both the base and instruct-tuned versions of Arctic under an Apache-2.0 license. This means you can use them freely in your ow...
-
sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco
sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco
model is from OpenMMLab's MMDetection library. This model is reported to obtain <a href="https://github.com/open-mmlab/mmdetection/blob/e9cae2d078... -
sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco
sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco
model is from OpenMMLab's MMDetection library. This model is reported to obtain <a href="https://github.com/open-mmlab/mmdetection/blob/e9cae2d0787... -
The RoBERTa Large model is a large transformer-based language model that was developed by the Hugging Face team. It is pre-trained on masked language modeling and can be used for tasks such as sequence classification, token classification, or question answering. Its primary usage is as a fine-tun...
-
stabilityai-stable-diffusion-2-1
This stable-diffusion-2-1 model is fine-tuned from stable-diffusion-2 (768-v-ema.ckpt) with an additional 55k steps on the same dataset (with punsafe=0.1), and then fine-tuned for another 155k extra steps with punsafe=0.98.
The mod...
-
stabilityai-stable-diffusion-2-inpainting
This stable-diffusion-2-inpainting model is resumed from stable-diffusion-2-base (512-base-ema.ckpt) and trained for another 200k steps. Follows the mask-generation strategy presented in LAMA wh... -
stabilityai-stable-diffusion-xl-base-1-0
SDXL consists of an ensemble of experts pipeline for latent diffusion: In a first step, the base model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) is used to generate (noisy) latents, wh...
-
stabilityai-stable-diffusion-xl-refiner-1-0
SDXL consists of an ensemble of experts pipeline for latent diffusion: In a first step, the base model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) is used to generate (noisy) latents, wh...
-
The developers of the Text-To-Text Transfer Transformer (T5) write:
With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to B...
-
The "Template Chat Flow" is a chat model using GPT3.5 that generates the next message based on the conversation history and the latest chat content.
Inference type | CLI | VS Code Extension |
---|---|---|
Real time | <a href="https://microsoft.github.io/promptflow/how-to-guides/dep... |
-
The "Template Evaluation Flow" is a evaluate model to measure how well the output matches the expected criteria and goals.
Inference type | CLI | VS Code Extension |
---|---|---|
Real time | <a href="https://microsoft.github.io/promptflow/how-to-guides/deploy-a-flow/index.html" tar... |
-
The "Template Standard Flow" is a model using GPT3.5 to generate a joke based on user input.
Inference type | CLI | VS Code Extension |
---|---|---|
Real time | deploy-promptflow... |
Falcon-40B is a large language model (LLM) developed by the Technology Innovation Institute (TII) with 40 billion parameters. It is a causal decoder-only model trained on 1 trillion tokens from the RefinedWeb dataset, enhanced with curated corpora. Falcon-40B supports English, Germa...
Falcon-40B-Instruct is a large language model with 40 billion parameters, developed by TII. It is a causal decoder-only model fine-tuned on a mixture of Baize data and is released under the Apache 2.0 license. This model is optimized for inference and features FlashAttention and mul...
Falcon-7B is a large language model with 7 billion parameters. It is a causal decoder-only model developed by TII and trained on 1,500 billion tokens of RefinedWeb dataset, which was enhanced with curated corpora. The model is available under the Apache 2.0 license. It outperforms c...
Falcon-7B-Instruct is a large language model with 7 billion parameters, developed by TII. It is a causal decoder-only model and is released under the Apache 2.0 license. This model is optimized for inference and features FlashAttention and multiquery architectures. It is primarily d...
-
vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco
vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco
model is from OpenMMLab's MMDetection library. This model is reported to obtain <a href="https://github.com/open-mmlab/mmdetection/blob/e9cae2d0787cd5c2fc6165a6061f92f... -
vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco
vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco
model is from OpenMMLab's MMDetection library. This model is reported to obtain <a href="https://github.com/open-mmlab/mmdetection/blob/e9cae2d0787cd5c2fc6165a6... -
Violent content includes language pertaining to physical actions intended to hurt, injure, damage, or kill someone or something. It also includes descriptions of weapons and guns (and related entities such as manufacturers and associations).
Safety evaluations ...
-
Virchow is a self-supervised vision transformer pretrained using 1.5M whole slide histopathology images. The model can be used as a tile-level feature extractor (frozen or finetuned) to achieve state-of-the-art results for a wide variety of downstream computational pathology use cases.
-
Virchow2 is a self-supervised vision transformer pretrained using 3.1M whole slide histopathology images. The model can be used as a tile-level feature extractor (frozen or finetuned) to achieve state-of-the-art results for a wide variety of downstream computational pathology use cases.
-
The "Web Classification" is a model demonstrating multi-class classification with LLM. Given an url, it will classify the url into one web category with just a few shots, simple summarization and classification prompts.
Inference type | CLI | VS Code Extension |
---|---|---|
Real... |
-
yolof_r50_c5_8x8_1x_coco
model is from OpenMMLab's MMDetection library. This model is reported to obtain <a href="https://github.com/open-mmlab/mmdetection/blob/e9cae2d0787cd5c2fc6165a6061f92fa09e48fb1/configs/...