[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
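A minimal usage sketch for scoring generated images with ImageReward, following the repo's README; the image paths here are hypothetical, and the package API may have changed since:

```python
# pip install image-reward
import ImageReward as RM

# Load the pretrained reward model (weights are downloaded on first use).
model = RM.load("ImageReward-v1.0")

prompt = "a painting of an ocean with clouds and birds"
images = ["gen_1.png", "gen_2.png"]  # hypothetical paths to generated images

# Higher scores indicate stronger alignment with human preferences.
for path in images:
    print(path, model.score(prompt, path))
```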
A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and evaluation.
SafeSora is a human preference dataset designed to support safety alignment research in the text-to-video generation field, aiming to enhance the helpfulness and harmlessness of Large Vision Models (LVMs).
A Toolkit for Distributional Control of Generative Models
Models, data, and code for the paper: MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models
An implementation of Learning Online with trajectory Preference guidancE (LOPE) in PyTorch
Fine-tuning Language Models with Conditioning on Two Human Preferences
Fine-tuning LLMs using conditional training to learn two human preferences. UCL Module Project: Statistical Natural Language Processing (COMP0087).
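As a rough illustration of the conditional-training idea behind this project (prepend control tokens encoding the preference labels, then fine-tune with the ordinary language-modeling loss), here is a minimal sketch; the token names and base model are assumptions, not the project's actual code:

```python
# Minimal sketch of conditional training on two preference attributes.
# Control-token names and the base model are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Special tokens encoding each example's two human-preference labels.
special = ["<|helpful|>", "<|unhelpful|>", "<|harmless|>", "<|harmful|>"]
tokenizer.add_special_tokens({"additional_special_tokens": special})
model.resize_token_embeddings(len(tokenizer))

def make_example(text: str, helpfulness: str, harmlessness: str) -> str:
    # Prepend control tokens so the LM learns p(text | preference labels).
    return f"<|{helpfulness}|><|{harmlessness}|>{text}"

# Training uses the standard causal-LM loss over make_example(...) strings.
# At inference, condition generation on the desired labels:
prompt = make_example("User: How do I sort a list in Python?\nAssistant:",
                      "helpful", "harmless")
ids = tokenizer(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40)
print(tokenizer.decode(out[0]))
```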
Analyzes a dataset of conversations from Chatbot Arena, in which various LLMs respond to user prompts. The goal is to develop a model that improves chatbot interactions by aligning them more closely with human preferences; a sketch of one standard modeling approach follows.
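Since Chatbot Arena data consists of pairwise human votes between two model responses, one common approach is a Bradley-Terry style preference model trained to score the chosen response above the rejected one. The sketch below uses random placeholder embeddings in place of a real text encoder:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a (prompt, response) embedding; a real encoder would feed this head."""
    def __init__(self, dim=768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, emb):
        return self.head(emb).squeeze(-1)

model = RewardModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Each Arena battle yields embeddings for the chosen and rejected responses.
emb_chosen = torch.randn(32, 768)    # placeholder for encoder output
emb_rejected = torch.randn(32, 768)

# Bradley-Terry / pairwise logistic loss: push the chosen score above the rejected one.
loss = -torch.nn.functional.logsigmoid(model(emb_chosen) - model(emb_rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```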
Official repository for "Text2Interaction: Establishing Safe and Preferable Human-Robot Interaction," presented at CoRL 2024.