A list of peer-reviewed research papers on the computation, analysis, and application of transferability in transfer learning. (work in progress)
Analytical metrics are methods that estimate transferability without training the transferred model on the target data (e.g., without fine-tuning). We group the papers by whether they mainly address the single-source or the multi-source transfer setting.
- Transferability and Hardness of Supervised Classification Tasks (2019)
- An Information-Theoretic Approach to Transferability in Task Transfer Learning (2019)
- LEEP: A New Measure to Evaluate Transferability of Learned Representations (2020)
- Geometric Dataset Distances via Optimal Transport (2020)
- OTCE: A Transferability Metric for Cross-Domain Cross-Task Representations (2021)
- LogME: Practical Assessment of Pre-trained Models for Transfer Learning (2021)
- Transferability Estimation using Bhattacharyya Class Separability (2021)
- Transferability Estimation Based on Principal Gradient Expectation (2022)
- Wasserstein Task Embedding for Measuring Task Similarities (2022)
- Feasibility and Transferability of Transfer Learning: A Mathematical Framework (2023)
- DataMap: Dataset transferability map for medical image classification (2023)
- Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks (2019)
- A Mathematical Framework for Quantifying Transferability in Multi-source Transfer Learning (2021)
- Transferability Metrics for Selecting Source Model Ensembles (2022)
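To make the idea of an analytical metric concrete, below is a minimal sketch of LEEP (from the 2020 paper listed above), one of the simplest such scores: it needs only a single forward pass of the pre-trained model over the target data, no fine-tuning. The function name `leep` and the synthetic inputs are illustrative; this is a sketch of the published formula, not a reference implementation.

```python
import numpy as np

def leep(probs, y):
    """LEEP score from a pre-trained model's source-label predictions.

    probs: (n, |Z|) array; row i is the model's predicted distribution
           over the source label set Z for target example i.
    y:     (n,) integer target labels in {0, ..., K-1}.
    Returns the average log-likelihood of the target labels under the
    Empirical Predictor; higher (closer to 0) suggests better transfer.
    """
    n = probs.shape[0]
    K = int(y.max()) + 1
    # Empirical joint P(y, z) = (1/n) * sum_i 1[y_i = y] * theta(x_i)_z
    joint = np.zeros((K, probs.shape[1]))
    for k in range(K):
        joint[k] = probs[y == k].sum(axis=0) / n
    # Conditional P(y | z) = P(y, z) / P(z)
    cond = joint / joint.sum(axis=0)          # shape (K, |Z|)
    # LEEP = (1/n) * sum_i log( sum_z P(y_i | z) * theta(x_i)_z )
    return np.mean(np.log((probs @ cond.T)[np.arange(n), y]))
```

Because the score is an average log-probability, it is always at most 0; comparing candidate pre-trained models by their LEEP score on the same target set is the intended use.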
Below we list empirical studies on transferability measurement or interpretation in different fields.
- How transferable are features in deep neural networks? (2014)
- Taskonomy: Disentangling Task Transfer Learning (2018)
- A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark (2019)