Model:
uer/pegasus-base-chinese-cluecorpussmall
This model is pre-trained by UER-py, which is introduced in this paper.
You can download the set of Chinese PEGASUS models either from the UER-py Modelzoo page, or via HuggingFace from the links below:
| Model | Link |
|---|---|
| PEGASUS-Base | 1238321 |
| PEGASUS-Large | 1239321 |
You can use this model directly with a pipeline for text2text generation:
>>> from transformers import BertTokenizer, PegasusForConditionalGeneration, Text2TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> model = PegasusForConditionalGeneration.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
>>> text2text_generator("内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。", max_length=50, do_sample=False)
[{'generated_text': '书 的 质 量 很 好 。'}]
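The pipeline is a thin wrapper around the model's generate() method. If you need more control over decoding, a roughly equivalent call is sketched below (the decoding parameters are illustrative assumptions, not the pipeline's exact defaults):
>>> from transformers import BertTokenizer, PegasusForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> model = PegasusForConditionalGeneration.from_pretrained("uer/pegasus-base-chinese-cluecorpussmall")
>>> inputs = tokenizer("内容丰富、版式设计考究、图片华丽、印制精美。[MASK]纸箱内还放了充气袋用于保护。", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_length=50, num_beams=1, do_sample=False)
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)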
CLUECorpusSmall is used as the training data.
The model is pre-trained by UER-py on Tencent Cloud. We pre-train 1,000,000 steps with a sequence length of 512. Taking the case of PEGASUS-Base:
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--data_processor gsg --sentence_selection_strategy random
python3 pretrain.py --dataset_path cluecorpussmall_pegasus_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/pegasus/base_config.json \
--output_model_path models/cluecorpussmall_pegasus_base_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 8
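For intuition, --data_processor gsg corresponds to PEGASUS's gap sentence generation (GSG) objective: some sentences of each document are selected (here with the random strategy), replaced by a mask token in the encoder input, and concatenated to form the decoder target. The toy sketch below only illustrates this idea; it is not UER-py's actual preprocessing code, and the sentence splitting, mask token, and gap ratio are simplified assumptions:

import random

def make_gsg_example(document, mask_token="[MASK]", gap_ratio=0.25):
    # Split the document into sentences on the Chinese full stop.
    sentences = [s + "。" for s in document.split("。") if s]
    # Randomly select a fraction of sentences to serve as "gap" sentences.
    num_gaps = max(1, int(len(sentences) * gap_ratio))
    gap_indices = set(random.sample(range(len(sentences)), num_gaps))
    # Source: original text with selected sentences replaced by the mask token.
    source = "".join(mask_token if i in gap_indices else s
                     for i, s in enumerate(sentences))
    # Target: the selected sentences, in document order.
    target = "".join(sentences[i] for i in sorted(gap_indices))
    return source, target

source, target = make_gsg_example("内容丰富、版式设计考究、图片华丽、印制精美。书的质量很好。纸箱内还放了充气袋用于保护。")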
Finally, we convert the pre-trained model into Huggingface's format:
python3 scripts/convert_pegasus_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_pegasus_base_seq512_model.bin-1000000 \
--output_model_path pytorch_model.bin \
--layers_num 12
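As a quick sanity check, you can load the converted checkpoint locally with Transformers. This is a minimal sketch; it assumes pytorch_model.bin has been placed together with a matching config.json and vocab.txt in a directory named cluecorpussmall_pegasus_base (the directory name is a placeholder):
>>> from transformers import BertTokenizer, PegasusForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("cluecorpussmall_pegasus_base")
>>> model = PegasusForConditionalGeneration.from_pretrained("cluecorpussmall_pegasus_base")
>>> model.num_parameters()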
@inproceedings{zhang2020pegasus,
title={Pegasus: Pre-training with extracted gap-sentences for abstractive summarization},
author={Zhang, Jingqing and Zhao, Yao and Saleh, Mohammad and Liu, Peter},
booktitle={International Conference on Machine Learning},
pages={11328--11339},
year={2020},
organization={PMLR}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}