Transfer learning (迁移学习) / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials.
Representation learning is a set of techniques in machine learning that automatically discover compact and meaningful features from raw data. It underpins modern advances in natural language processing, computer vision, and speech recognition.
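As a toy illustration of the idea above (a hypothetical sketch, not drawn from any repository listed here), a linear autoencoder learns a compact 2-D representation of 8-D inputs by minimizing reconstruction error with plain gradient descent:

```python
# Minimal representation-learning sketch: a linear autoencoder that compresses
# 8-D data onto a 2-D code. All names and sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data that genuinely lies on a 2-D subspace of an 8-D space.
latent = rng.normal(size=(256, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing

d_in, d_code = 8, 2
W_enc = rng.normal(scale=0.3, size=(d_in, d_code))  # encoder weights
W_dec = rng.normal(scale=0.3, size=(d_code, d_in))  # decoder weights

def mse(X, W_enc, W_dec):
    Z = X @ W_enc                 # the learned representation ("features")
    return np.mean((X - Z @ W_dec) ** 2)

lr = 0.02
loss_before = mse(X, W_enc, W_dec)
for _ in range(1000):
    Z = X @ W_enc
    R = Z @ W_dec - X             # reconstruction residual
    grad_dec = 2 * Z.T @ R / len(X)
    grad_enc = 2 * X.T @ (R @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = mse(X, W_enc, W_dec)
```

Because the data is exactly rank 2, a 2-D code suffices and the reconstruction loss drops sharply; the columns of `W_enc` end up spanning the same subspace that PCA would recover.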
Reading list for research topics in multimodal machine learning
SimCLRv2 - Big Self-Supervised Models are Strong Semi-Supervised Learners
EVA Series: Visual Representation Fantasies from BAAI
A curated list of network embedding techniques.
Self-Supervised Speech Pre-training and Representation Learning Toolkit
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Datasets, tools, and benchmarks for representation learning of code.
Python library for Representation Learning on Knowledge Graphs https://docs.ampligraph.org
The implementation of DeBERTa
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
A collection of research on knowledge graphs
Code for ALBEF: a new vision-language pre-training method
SCAN: Learning to Classify Images without Labels, incl. SimCLR. [ECCV 2020]
Unified Training of Universal Time Series Forecasting Transformers
Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer".
Code for TKDE paper "Self-supervised learning on graphs: Contrastive, generative, or predictive"
A comprehensive list of awesome contrastive self-supervised learning papers.
GraphVite: A General and High-performance Graph Embedding System
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️
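Several entries above (SimCLR, SCAN, the contrastive self-supervised learning list) revolve around the NT-Xent contrastive objective: embeddings of two augmented views of the same sample are pulled together while all other samples in the batch act as negatives. A minimal NumPy sketch (illustrative only, not taken from any listed repository):

```python
# NT-Xent (normalized temperature-scaled cross-entropy) loss, as used in
# SimCLR-style contrastive learning. Shapes and temperature are illustrative.
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (N, d) embeddings of two views of the same N samples.

    Returns the mean contrastive loss over all 2N anchors.
    """
    z = np.concatenate([z1, z2])                       # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    n = len(z1)
    sim = z @ z.T / tau                                # (2N, 2N) logits
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity
    # Each anchor i's positive is its other view at index i +/- n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), pos])

# Identical views agree maximally, so their loss is lower than for
# unrelated embeddings.
rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))
loss_aligned = nt_xent(z, z.copy())
loss_random = nt_xent(z, rng.normal(size=(16, 8)))
```

In practice the encoder is a deep network trained to minimize this loss over augmented image pairs; the sketch only shows the loss itself.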