Model:

vinai/phobert-base-v2

Table of contents
  • Introduction
  • PhoBERT with transformers
    • Installation
    • Pre-trained models
    • Example usage
  • PhoBERT with fairseq
  • Notes
  • License

PhoBERT: Pre-trained language models for Vietnamese

    The pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese (Pho, i.e. "Phở", is a popular food in Vietnam):

    • Two PhoBERT versions, "base" and "large", are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training is based on RoBERTa, which optimizes the BERT pre-training procedure for more robust performance.
    • PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performance on four downstream Vietnamese NLP tasks: part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference.

    The general architecture and experimental results of PhoBERT can be found in our paper:

    @inproceedings{phobert,
    title     = {{PhoBERT: Pre-trained language models for Vietnamese}},
    author    = {Dat Quoc Nguyen and Anh Tuan Nguyen},
    booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
    year      = {2020},
    pages     = {1037--1042}
    }
    

    Please cite our paper when PhoBERT is used to help produce published results or is incorporated into other software.

    PhoBERT with transformers

    Installation

    • Install transformers with pip: pip install transformers, or install transformers from source. Note that we merged a slow tokenizer for PhoBERT into the main transformers branch; the discussion about merging a fast tokenizer can be found in this pull request. If users would like to utilize the fast tokenizer, they can install transformers as follows:
    git clone --single-branch --branch fast_tokenizers_BARTpho_PhoBERT_BERTweet https://github.com/datquocnguyen/transformers.git
    cd transformers
    pip3 install -e .
    
    • Install tokenizers with pip: pip3 install tokenizers

    Pre-trained models

    Model                  #params  Arch.  Max length  Pre-training data
    vinai/phobert-base     135M     base   256         20GB of Wikipedia and News texts
    vinai/phobert-large    370M     large  256         20GB of Wikipedia and News texts
    vinai/phobert-base-v2  135M     base   256         20GB of Wikipedia and News texts + 120GB of texts from OSCAR-2301
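    All three models share a maximum input length of 256 tokens, so longer inputs must be truncated before being fed to the model; with transformers this is typically handled by tokenizer arguments such as truncation=True and max_length=256. A minimal plain-Python sketch of the idea, using a hypothetical list of token ids:

```python
# PhoBERT's maximum input length is 256 tokens (see the table above),
# so over-long token-id sequences must be truncated before encoding.
MAX_LENGTH = 256

def truncate_ids(token_ids, max_length=MAX_LENGTH):
    """Keep at most max_length token ids (simplified sketch; in practice
    tokenizer(..., truncation=True, max_length=256) does this for you)."""
    return token_ids[:max_length]

ids = list(range(300))  # hypothetical over-long sequence
print(len(truncate_ids(ids)))  # 256
```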

    Example usage

    import torch
    from transformers import AutoModel, AutoTokenizer
    
    phobert = AutoModel.from_pretrained("vinai/phobert-base-v2")
    tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base-v2")
    
    # INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
    sentence = 'Chúng_tôi là những nghiên_cứu_viên .'  
    
    input_ids = torch.tensor([tokenizer.encode(sentence)])
    
    with torch.no_grad():
        features = phobert(input_ids)  # Model outputs are now tuples
    
    ## With TensorFlow 2.0+:
    # from transformers import TFAutoModel
    # phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
    

    PhoBERT with fairseq

    Please see HERE for details!

    Notes

    In case the input text is raw, i.e. not yet word-segmented, a word segmenter must first be applied to produce word-segmented text before it is passed to PhoBERT. As PhoBERT employed the RDRSegmenter to pre-process its pre-training data (including text normalization as well as word and sentence segmentation), it is recommended to use the same word segmenter on raw input text in PhoBERT-based downstream applications.
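    The word-segmented format can be seen in the examples in this card: the syllables of each multi-syllable Vietnamese word are joined by underscores, as in the RDRSegmenter output. A tiny illustrative sketch of that format (join_word is a hypothetical helper, not part of VnCoreNLP):

```python
# PhoBERT expects word-segmented input: the syllables of a
# multi-syllable Vietnamese word are joined by underscores.
def join_word(syllables):
    """Join the syllables of one word with underscores."""
    return "_".join(syllables)

words = [["Chúng", "tôi"], ["là"], ["những"], ["nghiên", "cứu", "viên"], ["."]]
sentence = " ".join(join_word(w) for w in words)
print(sentence)  # Chúng_tôi là những nghiên_cứu_viên .
```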

    Installation
    pip install py_vncorenlp
    
    Example usage
    import py_vncorenlp
    
    # Automatically download VnCoreNLP components from the original repository
    # and save them in some local machine folder
    py_vncorenlp.download_model(save_dir='/absolute/path/to/vncorenlp')
    
    # Load the word and sentence segmentation component
    rdrsegmenter = py_vncorenlp.VnCoreNLP(annotators=["wseg"], save_dir='/absolute/path/to/vncorenlp')
    
    text = "Ông Nguyễn Khắc Chúc  đang làm việc tại Đại học Quốc gia Hà Nội. Bà Lan, vợ ông Chúc, cũng làm việc tại đây."
    
    output = rdrsegmenter.word_segment(text)
    
    print(output)
    # ['Ông Nguyễn_Khắc_Chúc đang làm_việc tại Đại_học Quốc_gia Hà_Nội .', 'Bà Lan , vợ ông Chúc , cũng làm_việc tại đây .']
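    As shown above, word_segment returns a list with one segmented string per sentence. The sentences can be fed to PhoBERT one at a time, or, as a simple assumption for illustration, joined into a single input string:

```python
# word_segment returns a list of word-segmented sentence strings.
# Joining them is one simple way to pass the whole text downstream;
# processing sentence-by-sentence is equally valid.
segmented = [
    'Ông Nguyễn_Khắc_Chúc đang làm_việc tại Đại_học Quốc_gia Hà_Nội .',
    'Bà Lan , vợ ông Chúc , cũng làm_việc tại đây .',
]
text_for_phobert = " ".join(segmented)
print(text_for_phobert)
```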
    

    License

    MIT License
    
    Copyright (c) 2020 VinAI Research
    
    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the "Software"), to deal
    in the Software without restriction, including without limitation the rights
    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:
    
    The above copyright notice and this permission notice shall be included in all
    copies or substantial portions of the Software.
    
    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
    SOFTWARE.