Releases: Unbabel/COMET
v2.2.4
v2.2.3
Minor bug fixes and new functionality since v2.2.2
v2.2.1
v2.2.0
Release of xCOMET models!
To make it easier to merge with internal code, we added a new class, XCOMETMetric. This release only supports inference with these models; training is not yet fully implemented.
Models can be accessed through the Hugging Face Hub.
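Inference with the released checkpoints can be sketched as follows. This assumes the standard `comet` Python API (`download_model`, `load_from_checkpoint`, `model.predict`); the checkpoint name `Unbabel/XCOMET-XL` and the `make_triples` helper are illustrative, not part of the release.

```python
# Sketch of xCOMET inference with the comet package
# (pip install unbabel-comet). Checkpoint name is an assumption.

def score_with_xcomet(data, checkpoint="Unbabel/XCOMET-XL"):
    # Imported inside the function so the sketch can be read (and the
    # helper below exercised) without the package installed.
    from comet import download_model, load_from_checkpoint
    model = load_from_checkpoint(download_model(checkpoint))
    # The model expects a list of dicts with "src", "mt" and "ref" keys.
    return model.predict(data, batch_size=8, gpus=0)

def make_triples(sources, translations, references):
    """Hypothetical helper: pack parallel lists into predict()'s format."""
    return [
        {"src": s, "mt": m, "ref": r}
        for s, m, r in zip(sources, translations, references)
    ]

data = make_triples(
    ["Dem Feuer konnte Einhalt geboten werden"],
    ["The fire could be stopped"],
    ["They were able to stop the fire."],
)
# prediction = score_with_xcomet(data)  # downloads several GB on first use
```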
v2.1.1
Minor bug fix in the default MBR model.
Rolled back to CometKiwi-22 as the default QE model (it is more lightweight).
Minor dependency updates.
v2.1.0
v2.0.2
Minor bug fix, Hugging Face Hub update, and release of the CometKiwi model.
v2.0.1
v2.0.0
- New model architecture (UnifiedMetric), inspired by UniTE. This model uses cross-encoding (similar to BLEURT), works with and without references, and can be trained in a multitask setting. It is also implemented flexibly: we can decide to train using just source and MT, reference and MT, or source, MT, and reference.
- New encoder models: RemBERT and XLM-RoBERTa-XL.
- New training features:
  - System-level accuracy (Kocmi et al., 2021) reported during validation (only if the validation file has a `system` column).
  - Support for multiple training files (each file will be loaded at the end of the corresponding epoch): this is helpful for training with large datasets and for curriculum training.
  - Support for multiple validation files: previously we used a single validation file with all language pairs concatenated, which hurts correlations. We can now use one validation file per language pair, and correlations are averaged over all validation sets. This also allows validation files whose ground-truth scores are on different scales.
- Support for the Hugging Face Hub: models can now be easily added to the Hugging Face Hub and used directly from the CLI.
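The system-level accuracy reported during validation can be sketched as follows: a minimal version of the pairwise ranking accuracy from Kocmi et al. (2021), i.e. the fraction of system pairs that the metric orders the same way as the human scores. The system names and scores below are hypothetical, and this simple version counts ties as disagreements, which the full implementation handles more carefully.

```python
# Minimal sketch of system-level pairwise accuracy (Kocmi et al., 2021).
from itertools import combinations

def system_accuracy(human: dict, metric: dict) -> float:
    """Fraction of system pairs ranked in the same order by the
    metric and by the human scores (ties count as disagreements)."""
    pairs = list(combinations(human, 2))
    agree = sum(
        (human[a] - human[b]) * (metric[a] - metric[b]) > 0
        for a, b in pairs
    )
    return agree / len(pairs)

# Hypothetical scores for three systems:
human = {"sysA": 0.92, "sysB": 0.81, "sysC": 0.55}
metric = {"sysA": 0.70, "sysB": 0.74, "sysC": 0.51}
print(system_accuracy(human, metric))  # 2 of 3 pairs agree -> 0.666...
```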
With this release we also add new models from WMT 22:
1) We won the WMT 22 QE shared task. Using UnifiedMetric it should be easy to replicate our final system; nonetheless, we are planning to release the system that was used: `wmt22-cometkiwi-da`, which performs strongly both on data from the QE task (the MLQE-PE corpus) and on data from the Metrics task (MQM annotations).
2) We were 2nd in the Metrics task (1st place was MetricXL, a 6B-parameter metric trained on top of mT5-XXL). Our new model `wmt22-comet-da` was part of the ensemble used to secure our result.
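Since `wmt22-cometkiwi-da` is a QE model, it scores translations from the source and MT alone. A minimal sketch, assuming the standard `comet` API; note that this checkpoint may be gated on the Hugging Face Hub and require accepting its license before download.

```python
# Sketch of reference-free (QE) scoring with wmt22-cometkiwi-da.
# Note the input dicts carry only "src" and "mt"; no "ref" is needed.

def score_qe(data, checkpoint="Unbabel/wmt22-cometkiwi-da"):
    # Imported here so the sketch is readable without the package
    # installed (pip install unbabel-comet).
    from comet import download_model, load_from_checkpoint
    model = load_from_checkpoint(download_model(checkpoint))
    return model.predict(data, batch_size=8, gpus=0)

data = [{"src": "O gato está em casa.", "mt": "The cat is at home."}]
# prediction = score_qe(data)  # not run here: downloads the checkpoint
```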
If you are interested in our work from this year, please read the following papers:
- CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task
- COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task
And the corresponding findings papers:
- Findings of the WMT 2022 Shared Task on Quality Estimation
- Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust
Special thanks to all the involved people: @mtreviso @nunonmg @glushkovato @chryssa-zrv @jsouza @DuarteMRAlves @Catarinafarinha @cmaroti
v1.1.3
Same as v1.1.2, but with some requirements bumped to make COMET easier to use on Windows and Apple M1.