Releases: Unbabel/COMET

v2.0.0

13 Mar 16:54
  • New model architecture (UnifiedMetric) inspired by UniTE.
    - This model uses cross-encoding (similar to BLEURT), works with and without references, and can be trained in a multitask setting. The implementation is flexible: training can use just source and MT, reference and MT, or source, MT, and reference (see the illustrative sketch below).
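
As a rough illustration of the cross-encoding idea (this is not the actual UnifiedMetric code), the sketch below concatenates whichever segments are available into a single input sequence, so one model covers the src+MT, ref+MT, and src+MT+ref settings. The encoder choice, pooling, and regression head are simplified placeholders.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
head = torch.nn.Linear(encoder.config.hidden_size, 1)  # toy regression head

def cross_encode(mt, src=None, ref=None):
    # Concatenate whichever inputs are present into one sequence; the
    # same model then handles reference-based and reference-free input.
    segments = [mt] + [s for s in (src, ref) if s is not None]
    batch = tokenizer(" </s> ".join(segments), return_tensors="pt", truncation=True)
    cls = encoder(**batch).last_hidden_state[:, 0]  # CLS-token pooling
    return head(cls).squeeze().item()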

  • New encoder models: RemBERT and XLM-RoBERTa-XL

  • New training features:
    - System-level accuracy (Kocmi et al., 2021) reported during validation, but only if the validation files have a system column (a sketch of this metric follows this list).
    - Support for multiple training files (each file is loaded at the end of the corresponding epoch). This is helpful for training with large datasets and for curriculum-style training.
    - Support for multiple validation files: previously we used a single validation file with all language pairs concatenated, which has an impact on correlations. We can now have one validation file per language pair, and correlations are averaged over all validation sets. This also allows validation files whose ground-truth scores are on different scales.
    - Support for the Hugging Face Hub: models can now easily be added to the Hub and used directly from the CLI (a loading example follows the WMT 22 models below).
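
The system-level accuracy mentioned above is, in Kocmi et al. (2021), the fraction of system pairs for which the metric's score delta agrees in sign with the human delta. A minimal sketch, assuming hypothetical metric_scores and human_scores dicts that map system name to mean score:

from itertools import combinations

def system_accuracy(metric_scores, human_scores):
    # Count system pairs where the metric delta and the human delta
    # point in the same direction (simplified; ties are not handled).
    pairs = list(combinations(metric_scores, 2))
    agree = sum(
        (metric_scores[a] - metric_scores[b])
        * (human_scores[a] - human_scores[b]) > 0
        for a, b in pairs
    )
    return agree / len(pairs)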

  • With this release we also add new models from WMT 22:
    1) We won the WMT 22 QE shared task. Using UnifiedMetric it should be easy to replicate our final system; nonetheless, we are planning to release the system that was used: wmt22-cometkiwi-da, which performs strongly both on data from the QE task (the MLQE-PE corpus) and on data from the Metrics task (MQM annotations).
    2) We placed 2nd in the Metrics task (1st place was MetricX XL, a 6B-parameter metric trained on top of mT5-XXL). Our new model wmt22-comet-da was part of the ensemble used to secure our result. A usage sketch for both models follows below.
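
For illustration, downloading and scoring with these models from the Hugging Face Hub might look like the sketch below. This assumes both checkpoints are published on the Hub under the Unbabel organization; the example data and batch settings are made up.

from comet import download_model, load_from_checkpoint

# Reference-based model from the Metrics task.
model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": "Dem Feuer konnte Einhalt geboten werden",
         "mt": "The fire could be stopped",
         "ref": "They were able to stop the fire"}]
print(model.predict(data, batch_size=8, gpus=0))

# Reference-free QE model: same call, just omit the "ref" field.
qe = load_from_checkpoint(download_model("Unbabel/wmt22-cometkiwi-da"))
print(qe.predict([{"src": d["src"], "mt": d["mt"]} for d in data], batch_size=8, gpus=0))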

If you are interested in our work from this year, please read our WMT 22 submission papers and the corresponding findings papers.

Special thanks to everyone involved: @mtreviso @nunonmg @glushkovato @chryssa-zrv @jsouza @DuarteMRAlves @Catarinafarinha @cmaroti

v1.1.3

13 Jan 14:44

Same as v1.1.2, but we bumped some requirements to make COMET easier to use on Windows and Apple M1.

Version 1.1.2

06 Jun 18:59

Just minor requirement updates to avoid the installation errors described in #82.

Version 1.1.1

01 Jun 22:47
  1. comet-compare now supports multiple system comparisons.
  2. Bugfix: broken link for wmt21-comet-qe-da (#78)
  3. Bugfix: protobuf dependency (#82)
  4. New models from the Cometinho EAMT 22 paper (eamt22-cometinho-da & eamt22-comet-prune-da)

Breaking Changes

comet-compare no longer supports the -x and -y flags. It now receives a single -t flag with multiple arguments, one per system.

Before:

comet-compare -s src.de -x hyp1.en -y hyp2.en -r ref.en

After:

comet-compare -s src.de -t hyp1.en hyp2.en -r ref.en

Full Changelog: v1.1.0...v1.1.1

Version 1.1.0

02 Apr 18:52
  1. Updated documentation
  2. Updated PyTorch Lightning version to avoid security vulnerabilities (untrusted data & code injection)
  3. Inspired by Amrhein et al. (2022), we added the comet-mbr command for fast Minimum Bayes Risk decoding (a sketch of the idea follows this list).
  4. New encoder models
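
As a rough sketch of the idea behind comet-mbr (not its actual implementation), MBR decoding selects the candidate translation with the highest expected utility, estimated by scoring each candidate against all other candidates acting as pseudo-references. Here score(src, hyp, ref) is a hypothetical stand-in for a COMET model call:

def mbr_decode(src, candidates, score):
    # Pick the hypothesis with the highest average utility against
    # the other candidates used as pseudo-references.
    if len(candidates) == 1:
        return candidates[0]
    def utility(hyp):
        others = [c for c in candidates if c is not hyp]
        return sum(score(src, hyp, ref) for ref in others) / len(others)
    return max(candidates, key=utility)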


Full Changelog: v1.0.1...v1.1.0

Version 1.0.1

19 Nov 15:51

Adds scipy, which was missing from the dependencies list.

Version 1.0.0

19 Nov 14:59

What's new?

  1. comet-compare command for statistical comparison between two models
  2. comet-score with multiple hypotheses/systems
  3. Embedding caching for faster inference (thanks to @jsouza)
  4. Length batching for faster inference (thanks to @CoderPat)
  5. Integration with SacreBLEU for dataset downloading (thanks to @mjpost)
  6. Monte Carlo dropout for uncertainty estimation (thanks to @glushkovato and @chryssa-zrv); a sketch of the technique follows this list
  7. Some code refactoring
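
Item 6 refers to Monte Carlo dropout. A minimal sketch of the general technique, for an arbitrary PyTorch regression model rather than COMET's exact implementation:

import torch

def mc_dropout(model, batch, n_samples=30):
    # Keep dropout active at inference time and aggregate repeated
    # stochastic forward passes into a mean score and an uncertainty.
    model.train()  # enables dropout layers
    with torch.no_grad():
        samples = torch.stack([model(batch) for _ in range(n_samples)])
    model.eval()
    return samples.mean(dim=0), samples.std(dim=0)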

Hopefully, this version is also easier to install than the previous one, which relied on fairseq.

Version 0.1.0

11 Mar 17:55
  • We now use Poetry to solve dependency issues.
  • Removed the LASER encoder and fastBPE dependencies (Windows users can now run COMET)
  • Removed the reference requirement for QE models