CrossCLR - ICCV 2021

This is the official implementation of the paper:

CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations [Paper]

Authors: Mohammadreza Zolfaghari, Yi Zhu, Peter Gehler, Thomas Brox

Update

[06 Feb 2022] CrossCLR code released
[Dec 2021] CrossCLR-onlyIntraModality released

Loss Function

The CrossCLR loss function in loss.py takes video features and text features as input and returns the loss.

Usage:

```python
from trainer.loss import CrossCLR_onlyIntraModality

# define the loss with a temperature `temp` and a weight `w` for negative samples
criterion = CrossCLR_onlyIntraModality(temperature=temp, negative_weight=w)

# features: [bsz, f_dim]
video_features = ...
text_features = ...

# CrossCLR
loss = criterion(video_features, text_features)

...
```
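As a rough, unofficial sketch of the underlying idea (not the repository's exact formulation, which lives in `trainer/loss.py`), an intra-modality variant can be written as a symmetric InfoNCE loss in which each video is pulled toward its paired text, while negatives from the same modality are down-weighted by `negative_weight`. The function name and default values below are illustrative assumptions only:

```python
import numpy as np

def crossclr_intra_sketch(video, text, temperature=0.03, negative_weight=0.8):
    """Illustrative sketch of an InfoNCE loss with weighted intra-modality
    negatives. `temperature` and `negative_weight` defaults are assumptions,
    not the repository's values."""
    # L2-normalize embeddings so dot products become cosine similarities
    v = video / np.linalg.norm(video, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)

    sim_vt = v @ t.T / temperature  # cross-modal similarities [bsz, bsz]
    sim_vv = v @ v.T / temperature  # intra-modal video-video similarities
    sim_tt = t @ t.T / temperature  # intra-modal text-text similarities

    n = sim_vt.shape[0]
    off = ~np.eye(n, dtype=bool)          # mask selecting negatives
    pos = np.exp(np.diag(sim_vt))         # matched video-text pairs

    # denominator: positive pair + cross-modal negatives
    # + intra-modal negatives scaled by negative_weight
    denom_v = pos + (np.exp(sim_vt) * off).sum(axis=1) \
                  + negative_weight * (np.exp(sim_vv) * off).sum(axis=1)
    denom_t = pos + (np.exp(sim_vt) * off).sum(axis=0) \
                  + negative_weight * (np.exp(sim_tt) * off).sum(axis=1)

    loss_v = -np.log(pos / denom_v)       # video -> text direction
    loss_t = -np.log(pos / denom_t)       # text -> video direction
    return float((loss_v + loss_t).mean() / 2)
```

Perfectly aligned pairs drive the positive term to dominate the denominator, so the loss shrinks as the two modalities agree.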

Qualitative samples

Reference

@inproceedings{crossclr_aws_21,
  author    = {Mohammadreza Zolfaghari and
               Yi Zhu and
               Peter V. Gehler and
               Thomas Brox},
  title     = {CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations},
  url       = {https://arxiv.org/abs/2109.14910},
  eprinttype = {arXiv},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
}

Security

See CONTRIBUTING for more information.

Acknowledgements

Part of this code is inspired by Simon Ging's COOT implementation [Code].

License

This project is licensed under the Apache-2.0 License.
