The metrics used for the challenge include:
- BLEU + NIST from MT-Eval,
- METEOR, ROUGE-L, CIDEr from the MS-COCO Caption evaluation scripts.
The metrics script requires the following dependencies:
- Java 1.8
- Python 3.6+ with matplotlib and scikit-image packages
- Perl 5.8.8 or higher with the XML::Twig CPAN module
To install the required Python packages, run (assuming root access or virtualenv):
pip install -r requirements.txt
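As an optional sanity check (this command is not part of the original instructions; note that skimage is the import name of the scikit-image package), you can verify that the packages are importable:
python3 -c "import matplotlib, skimage; print('OK')"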
To install the required Perl module, run (assuming root access or perlbrew/plenv):
curl -L https://cpanmin.us | perl - App::cpanminus # install cpanm
cpanm XML::Twig
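To confirm the Perl module installed correctly (again, an optional check rather than a required step), you can print its version:
perl -MXML::Twig -e 'print "XML::Twig $XML::Twig::VERSION\n"'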
The main entry point is measure_scores.py. To get a listing of all available options, run:
./measure_scores.py -h
The system outputs and human references can either be in a TSV/CSV format, or in plain text. This is distinguished by the file extension (plain text is assumed unless the extension is .tsv or .csv).
For TSV/CSV, the script assumes that the first column contains the source MRs/texts and the second column contains system outputs or references. Multiple references for the same source MR/text are grouped automatically (either by matching the sources in the system output file, if that is also a TSV/CSV, or by consecutive identical sources). If the TSV/CSV file has headers with reasonably identifiable labels (e.g. “MR”, “source”, “system output”, “reference”; some guessing is involved), the columns should be identified automatically. In that case, the file doesn't need to contain just two columns in exactly that order.
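For illustration, a small CSV reference file could look like the following (the header labels and the MR syntax here are only illustrative, not prescribed by the script); the first two rows would be grouped as two references for the same MR:
MR,reference
"name[The Eagle], food[French]",The Eagle serves French food.
"name[The Eagle], food[French]",French food is available at The Eagle.
"name[The Mill], eatType[pub]",The Mill is a pub near the river.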
For plain text files, the script assumes one instance per line in the system output file; in the references file, it assumes either one reference per line, or groups of multiple references for the same instance separated by empty lines (see TGen data conversion).
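For illustration, a plain-text references file with two references for the first instance and one for the second could look like this (the sentences are made up):
The Eagle serves French food.
French food is available at The Eagle.

The Mill is a pub near the river.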
Example human reference and system output files are provided in the example-inputs subdirectory -- you can try the script on them using this command:
./measure_scores.py example-inputs/devel-conc.txt example-inputs/baseline-output.txt
We used the NIST MT-Eval v13a script adapted for significance tests, from http://www.cs.cmu.edu/~ark/MT/, and further adapted it to allow a variable number of references.
The MS-COCO Caption evaluation scripts provide a different variant of BLEU (which is not used for evaluation in the E2E challenge), as well as METEOR, ROUGE-L, and CIDEr. We used the GitHub code for these metrics. The metrics themselves are unchanged, apart from removing support for images and some of the dependencies.
- Microsoft COCO Captions: Data Collection and Evaluation Server
- PTBTokenizer: We use the Stanford Tokenizer, which is included in Stanford CoreNLP 3.4.1.
- BLEU: BLEU: a Method for Automatic Evaluation of Machine Translation
- NIST: Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics
- Meteor: Project page with related publications. We use the latest version (1.5) of the code. Changes have been made to the source code to properly aggregate the statistics for the entire corpus.
- Rouge-L: ROUGE: A Package for Automatic Evaluation of Summaries
- CIDEr: CIDEr: Consensus-based Image Description Evaluation
Original developers of the MSCOCO evaluation scripts:
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, David Chiang, Michael Denkowski, Alexander Rush