
opus-mt-tc-big-fr-en

Neural machine translation model for translating from French (fra) to English (eng).

This model is part of the OPUS-MT project, an effort to make neural machine translation models widely available and accessible for many of the world's languages. All models were originally trained with the excellent framework of Marian NMT, an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data comes from OPUS, and the training pipeline follows the procedures of OPUS-MT-train.

@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg  and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}

Model info

Usage

A short example code:

from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "J'ai adoré l'Angleterre.",
    "C'était la seule chose à faire."
]

model_name = "Helsinki-NLP/opus-mt-tc-big-fr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
#     I loved England.
#     It was the only thing to do.

You can also use OPUS-MT models with the pipelines feature of the transformers library, for example:

from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fr-en")
print(pipe("J'ai adoré l'Angleterre."))

# expected output: [{'translation_text': 'I loved England.'}]

Benchmarks

langpair testset chr-F BLEU #sent #words
fra-eng tatoeba-test-v2021-08-07 0.73772 59.8 12681 101754
fra-eng flores101-devtest 0.69350 46.0 1012 24721
fra-eng multi30k_test_2016_flickr 0.68005 49.7 1000 12955
fra-eng multi30k_test_2017_flickr 0.70596 52.0 1000 11374
fra-eng multi30k_test_2017_mscoco 0.69356 50.6 461 5231
fra-eng multi30k_test_2018_flickr 0.65751 44.9 1071 14689
fra-eng newsdiscussdev2015 0.59008 34.4 1500 27759
fra-eng newsdiscusstest2015 0.62603 40.2 1500 26982
fra-eng newssyscomb2009 0.57488 31.1 502 11818
fra-eng news-test2008 0.54316 26.5 2051 49380
fra-eng newstest2009 0.56959 30.4 2525 65399
fra-eng newstest2010 0.59561 33.4 2489 61711
fra-eng newstest2011 0.60271 33.8 3003 74681
fra-eng newstest2012 0.59507 33.6 3003 72812
fra-eng newstest2013 0.59691 34.8 3000 64505
fra-eng newstest2014 0.64533 39.4 3003 70708
fra-eng tico19-test 0.63326 41.3 2100 56323
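For orientation, the chr-F column above is a character n-gram F-score. The following is a minimal, illustrative sketch of the idea only, not the exact sacreBLEU implementation used for the scores above; the `max_order=6` and `beta=2` defaults follow the common chrF2 convention:

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF is computed over character n-grams with whitespace removed
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_order=6, beta=2.0):
    """Character n-gram F-score (simplified single-sentence sketch)."""
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp = char_ngrams(hypothesis, n)
        ref = char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        hyp_total = sum(hyp.values())
        ref_total = sum(ref.values())
        if hyp_total == 0 or ref_total == 0:
            continue
        precisions.append(overlap / hyp_total)
        recalls.append(overlap / ref_total)
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    # F-beta with beta=2 weights recall twice as much as precision
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("I loved England.", "I loved England."))  # identical strings → 1.0
```

The published scores are on a 0-1 scale (e.g. 0.73772), matching the normalized F-score here; sacreBLEU additionally averages statistics over the whole corpus rather than per sentence.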

Acknowledgements

This work is supported by the European Language Grid as pilot project 2866, by the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113), and by the MeMAD project, funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by CSC -- IT Center for Science, Finland.

Model conversion info

  • transformers version: 4.16.2
  • OPUS-MT git hash: 3405783
  • port time: Apr 13, 2022, 19:02:28 EEST
  • port machine: LM0-400-22516.local