Model:

microsoft/DialogRPT-human-vs-rand

Language: English

Demo

Please try it out ➤➤➤ Colab Notebook Demo (click me!)

| Context | Response | human_vs_rand score |
| :------ | :------- | :------------------ |
| I love NLP! | He is a great basketball player. | 0.027 |
| I love NLP! | Can you tell me how it works? | 0.754 |
| I love NLP! | Me too! | 0.631 |

The `human_vs_rand` score predicts how likely the response is a relevant response for the given context, rather than a randomly picked one.
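Under the standard Hugging Face `transformers` API, the score can be computed roughly as below. This is a sketch, not the card's verbatim snippet: the `<|endoftext|>` separator follows DialogRPT's documented context/response input format, and a sigmoid maps the model's single logit into (0, 1).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hub model id; weights are downloaded on first use.
MODEL_ID = "microsoft/DialogRPT-human-vs-rand"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def score(context: str, response: str) -> float:
    # DialogRPT takes the context and response joined by <|endoftext|>.
    input_ids = tokenizer.encode(context + "<|endoftext|>" + response,
                                 return_tensors="pt")
    with torch.no_grad():
        logits = model(input_ids).logits
    # Sigmoid turns the raw logit into a score in (0, 1).
    return torch.sigmoid(logits).item()

print(score("I love NLP!", "Can you tell me how it works?"))   # high (relevant)
print(score("I love NLP!", "He is a great basketball player."))  # low (random)
```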

DialogRPT-human-vs-rand

Dialog Ranking Pretrained Transformers

How likely is a dialog response to be upvoted? How likely is it to get replied to?

This is what DialogRPT is learned to predict. It is a set of dialog response ranking models proposed by the Microsoft Research NLP Group, trained on more than 100 million human feedback examples. They can be used to improve existing dialog generation models (e.g., DialoGPT) by re-ranking the generated response candidates.
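The re-ranking step described above can be sketched as follows. The `score` function here is a toy stand-in (word overlap with the context), used only for illustration; in practice it would be a DialogRPT scorer as in the table above.

```python
# Sketch: re-rank generated response candidates by a relevance score.
def score(context: str, response: str) -> float:
    # Hypothetical stand-in scorer: fraction of response words that
    # also appear in the context. A real setup would call DialogRPT.
    ctx_words = set(context.lower().split())
    resp_words = set(response.lower().split())
    return len(ctx_words & resp_words) / max(len(resp_words), 1)

def rerank(context: str, candidates: list[str]) -> list[str]:
    # Best-scoring candidate first.
    return sorted(candidates, key=lambda r: score(context, r), reverse=True)

context = "I love NLP!"
candidates = [
    "He is a great basketball player.",
    "Me too! I love NLP!",
]
print(rerank(context, candidates)[0])  # -> "Me too! I love NLP!"
```

A generation model would produce `candidates` (e.g., via beam search or sampling from DialoGPT), and the top-ranked response would be returned to the user.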

Quick links:

We considered the following tasks and provide corresponding pretrained models.

| Task | Description | Pretrained model |
| :--- | :---------- | :--------------- |
| **Human feedback** | given a context and its two human responses, predict... | |
| `updown` | ... which gets more upvotes? | DialogRPT-updown |
| `width` | ... which gets more direct replies? | DialogRPT-width |
| `depth` | ... which gets a longer follow-up thread? | DialogRPT-depth |
| **Human-like** (human vs fake) | given a context and one human response, distinguish it from... | |
| `human_vs_rand` | ... a random human response | this model |
| `human_vs_machine` | ... a machine-generated response | DialogRPT-human-vs-machine |

Contact:

Please create an issue on our repo

Citation:

@inproceedings{gao2020dialogrpt,
    title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
    author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
    year={2020},
    booktitle={EMNLP}
}