
opus-mt-tc-big-zle-en

Neural machine translation model for translating from East Slavic languages (zle) to English (en).

This model is part of the OPUS-MT project, an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were trained using the excellent framework of Marian NMT, an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data comes from OPUS, and training pipelines follow the procedures of OPUS-MT-train.

Publications (please cite if you use this model):

@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg  and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}

Model info

Usage

A short example code:

from transformers import MarianMTModel, MarianTokenizer

# source sentences (Ukrainian in this example)
src_text = [
    "Скільки мені слід купити пива?",
    "Я клієнтка."
]

# load the published checkpoint from the Hugging Face Hub
model_name = "Helsinki-NLP/opus-mt-tc-big-zle-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# tokenize with padding so the whole batch goes through one generate() call
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
#     How much beer should I buy?
#     I'm a client.
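Equivalently, the per-sentence decode loop can be replaced with a single tokenizer.batch_decode call. A minimal sketch of the same translation using that variant:

from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-zle-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Скільки мені слід купити пива?", "Я клієнтка."]

# tokenize, translate, and decode the whole batch in one go
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
# expected: ["How much beer should I buy?", "I'm a client."]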

You can also use OPUS-MT models with the transformers pipelines, for example:

from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-en")
print(pipe("Скільки мені слід купити пива?"))

# expected output: How much beer should I buy?
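The pipeline returns its results as a list of dicts rather than bare strings. A minimal sketch, reusing the two source sentences from the first example, of translating a batch and pulling out the translation_text field:

from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-zle-en")

# the pipeline accepts a batch of source sentences
src_text = [
    "Скільки мені слід купити пива?",
    "Я клієнтка."
]

# each result is a dict like {'translation_text': '...'}
for result in pipe(src_text):
    print(result["translation_text"])

# expected output:
#     How much beer should I buy?
#     I'm a client.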

Benchmarks

langpair  testset                   chr-F    BLEU   #sent   #words
bel-eng   tatoeba-test-v2021-08-07  0.65221  48.1    2500    18571
rus-eng   tatoeba-test-v2021-08-07  0.71452  57.4   19425   147872
ukr-eng   tatoeba-test-v2021-08-07  0.71162  56.9   13127    88607
bel-eng   flores101-devtest         0.51689  18.1    1012    24721
rus-eng   flores101-devtest         0.62581  35.2    1012    24721
ukr-eng   flores101-devtest         0.65001  39.2    1012    24721
rus-eng   newstest2012              0.63724  39.2    3003    72812
rus-eng   newstest2013              0.57641  31.3    3000    64505
rus-eng   newstest2014              0.65667  40.5    3003    69190
rus-eng   newstest2015              0.61747  36.1    2818    64428
rus-eng   newstest2016              0.61414  35.7    2998    69278
rus-eng   newstest2017              0.65365  40.8    3001    69025
rus-eng   newstest2018              0.61386  35.2    3000    71291
rus-eng   newstest2019              0.65476  41.6    2000    42642
rus-eng   newstest2020              0.64878  36.9     991    20217
rus-eng   newstestB2020             0.65685  39.3     991    20423
rus-eng   tico19-test               0.63280  33.3    2100    56323
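For context, BLEU and chr-F scores of this kind can be computed with the sacrebleu package. A minimal sketch with hypothetical hypothesis and reference strings (not the actual test sets above); note that sacrebleu reports chrF on a 0-100 scale, while the table uses 0-1:

import sacrebleu

# hypothetical system outputs and aligned references, for illustration only
hypotheses = ["How much beer should I buy?", "I'm a client."]
references = [["How much beer should I buy?", "I am a customer."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)  # corpus-level BLEU
chrf = sacrebleu.corpus_chrf(hypotheses, references)  # corpus-level chrF

# divide sacrebleu's 0-100 chrF by 100 to compare with the chr-F column above
print(f"BLEU: {bleu.score:.1f}  chr-F: {chrf.score / 100:.5f}")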

Acknowledgements

The work is supported by the European Language Grid as pilot project 2866, by the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113), and by the MeMAD project, funded by the European Union's Horizon 2020 research and innovation programme (grant agreement No 780069). We are also grateful for the generous computational resources and IT infrastructure provided by CSC -- IT Center for Science, Finland.

Model conversion info

  • transformers version: 4.16.2
  • OPUS-MT git hash: 1bdabf7
  • conversion time: 2022-03-23 22:17:11 EET
  • conversion machine: LM0-400-22516.local