Dataset:
wmt14
Warning: There are issues with the Common Crawl corpus data (training-parallel-commoncrawl.tgz); we have contacted the WMT organizers.
Translation dataset based on the data from statmt.org.
Versions exist for different years, each using a combination of multiple data sources. The base wmt allows you to create a custom dataset by choosing your own data/language pair. This can be done as follows:
import datasets
from datasets import inspect_dataset, load_dataset_builder

# Copy the wmt14 loading scripts to a local directory so they can be customized
inspect_dataset("wmt14", "path/to/scripts")
builder = load_dataset_builder(
    "path/to/scripts/wmt_utils.py",
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)
# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()
# Streamable version
ds = builder.as_streaming_dataset()
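If one of the predefined wmt14 configurations is enough, the dataset can also be loaded directly with the standard load_dataset API; a minimal sketch, assuming the cs-en pair listed in the splits table below:

from datasets import load_dataset

# Load the predefined cs-en configuration of wmt14
ds = load_dataset("wmt14", "cs-en")
print(ds)  # DatasetDict with 'train', 'validation' and 'test' splits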
An example of 'train' looks as follows.
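A minimal sketch of how to fetch one such example, assuming the cs-en configuration; the values in the trailing comment are placeholders, not real corpus text:

from datasets import load_dataset

# Stream the train split and take the first example without downloading everything
stream = load_dataset("wmt14", "cs-en", split="train", streaming=True)
example = next(iter(stream))
print(example)
# Expected shape (placeholder values):
# {'translation': {'cs': '<Czech sentence>', 'en': '<English sentence>'}}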
The data fields are the same among all splits.
| name | train | validation | test |
|---|---|---|---|
| cs-en | 953621 | 3000 | 3003 |
@InProceedings{bojar-EtAl:2014:W14-33,
  author    = {Bojar, Ondrej  and  Buck, Christian  and  Federmann, Christian  and  Haddow, Barry  and  Koehn, Philipp  and  Leveling, Johannes  and  Monz, Christof  and  Pecina, Pavel  and  Post, Matt  and  Saint-Amand, Herve  and  Soricut, Radu  and  Specia, Lucia  and  Tamchyna, Ale{\v{s}}},
  title     = {Findings of the 2014 Workshop on Statistical Machine Translation},
  booktitle = {Proceedings of the Ninth Workshop on Statistical Machine Translation},
  month     = {June},
  year      = {2014},
  address   = {Baltimore, Maryland, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {12--58},
  url       = {http://www.aclweb.org/anthology/W/W14/W14-3302}
}
Thanks to @thomwolf, @patrickvonplaten for adding this dataset.