awesome-semantic-segmentation
Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. YC W23
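Since this entry advertises OpenAI SDK integration, a minimal sketch of the documented drop-in wrapper pattern may help; the module path and environment-variable setup are assumptions taken from Langfuse's docs and may differ across SDK versions:

```python
# Sketch of Langfuse's drop-in OpenAI wrapper (module path assumed from
# its docs); expects LANGFUSE_* and OPENAI_API_KEY environment variables.
from langfuse.openai import openai  # traces calls made through the OpenAI SDK

completion = openai.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello."}],
)
print(completion.choices[0].message.content)
```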
Test your prompts, agents, and RAG pipelines. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare the performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command-line and CI/CD integration.
OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) across 100+ datasets.
Python package for the evaluation of odometry and SLAM
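This description matches the evo package; assuming that is the repo, here is a minimal sketch of its Python API for computing absolute pose error on TUM-format trajectories (file names are placeholders):

```python
# Sketch using evo's Python API; trajectory file paths are placeholders.
from evo.core import metrics, sync
from evo.tools import file_interface

# Load reference and estimated trajectories, then time-associate them.
traj_ref = file_interface.read_tum_trajectory_file("reference.txt")
traj_est = file_interface.read_tum_trajectory_file("estimate.txt")
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)

# Absolute pose error on the translation component, reported as RMSE.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print(ape.get_statistic(metrics.StatisticsType.rmse))
```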
Building a modern functional compiler from first principles. (http://dev.stephendiehl.com/fun/)
Klipse is a JavaScript plugin for embedding interactive code snippets in tech blogs.
SuperCLUE: A comprehensive benchmark for general-purpose Chinese large models | A Benchmark for Foundation Models in Chinese
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
End-to-end Automatic Speech Recognition for Mandarin and English in TensorFlow
Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23
A unified evaluation framework for large language models
An open-source visual programming environment for battle-testing prompts to LLMs.
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. It provides grades for 20+ preconfigured checks (covering language, code, and embedding use cases), performs root cause analysis on failure cases, and gives insights on how to resolve them.
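A minimal sketch of UpTrain's preconfigured checks, based on its README quickstart; the check names and the EvalLLM entry point are assumptions that may differ across versions:

```python
# Sketch of UpTrain's evaluation API (names taken from its quickstart).
from uptrain import EvalLLM, Evals

data = [{
    "question": "Which continent is the Sahara in?",
    "context": "The Sahara is a desert on the African continent.",
    "response": "The Sahara is in Africa.",
}]

eval_llm = EvalLLM(openai_api_key="sk-...")  # placeholder key
results = eval_llm.evaluate(
    data=data,
    checks=[Evals.CONTEXT_RELEVANCE, Evals.RESPONSE_COMPLETENESS],
)
print(results)
```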
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
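A minimal usage sketch for 🤗 Evaluate: load a metric by name, then score predictions against references:

```python
import evaluate

# Load a metric by name and score predictions against references.
accuracy = evaluate.load("accuracy")
results = accuracy.compute(
    references=[0, 1, 1, 0],
    predictions=[0, 1, 0, 0],
)
print(results)  # e.g. {'accuracy': 0.75}
```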
Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
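A minimal sketch of Avalanche's benchmark/strategy training loop; module paths follow recent documentation and have moved between versions:

```python
# Minimal continual-learning loop with Avalanche (module paths follow
# recent docs; older releases used avalanche.training.strategies).
import torch
from avalanche.benchmarks.classic import SplitMNIST
from avalanche.models import SimpleMLP
from avalanche.training.supervised import Naive

benchmark = SplitMNIST(n_experiences=5)  # MNIST split into 5 tasks
model = SimpleMLP(num_classes=10)

strategy = Naive(
    model,
    torch.optim.SGD(model.parameters(), lr=0.01),
    torch.nn.CrossEntropyLoss(),
    train_mb_size=64,
    train_epochs=1,
)

for experience in benchmark.train_stream:   # one task at a time
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)    # evaluate across all tasks
```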
Evaluating state of the art in AI
(IROS 2020, ECCVW 2020) Official Python Implementation for "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"