Conversational
Conversational response modelling is the task of generating conversational text that is relevant, coherent and knowledgeable given a prompt. These models have applications in chatbots and as components of voice assistants.
Input
Hey my name is Julien! How are you?
Answer
Hi Julien! My name is Julia! I am well.
About Conversational
Use Cases
Chatbot 💬
Chatbots are used to hold conversations in place of direct contact with a live human. They can provide customer service, handle sales, and even play games (see ELIZA from 1966 for one of the earliest examples).
Voice Assistants 🎙️
Conversational response models are used as part of voice assistants to provide appropriate responses to voice-based queries.
Inference
You can infer with Conversational models using the 🤗 Transformers library and the conversational pipeline. This pipeline takes a conversation prompt or a list of conversations and generates a response for each one. The models this pipeline can use have been fine-tuned on a multi-turn conversational task (see https://Model Database.co/models?filter=conversational for an up-to-date list of Conversational models).
from transformers import pipeline, Conversation
converse = pipeline("conversational")
conversation_1 = Conversation("Going to the movies tonight - any suggestions?")
conversation_2 = Conversation("What's the last book you have read?")
converse([conversation_1, conversation_2])
## Output:
## Conversation 1
## user >> Going to the movies tonight - any suggestions?
## bot >> The Big Lebowski
## Conversation 2
## user >> What's the last book you have read?
## bot >> The Last Question
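The conversational pipeline keeps track of the dialogue history, so you can continue a conversation across turns. The snippet below is a minimal sketch, assuming a transformers version that still ships the conversational pipeline and the Conversation class, which exposes add_user_input for appending the next user message.
# Add a follow-up user message to the first conversation and generate the next reply
conversation_1.add_user_input("Is it an action movie?")
converse([conversation_1])
# The Conversation object accumulates the dialogue so far
print(conversation_1.generated_responses)  # list of the bot's replies in this conversation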
You can use Model Database.js to infer with conversational models on the Model Database Hub.
import { HfInference } from "@Model Database/inference";
const inference = new HfInference(HF_ACCESS_TOKEN);
await inference.conversational({
model: 'facebook/blenderbot-400M-distill',
inputs: "Going to the movies tonight - any suggestions?"
})
Useful Resources
- Learn how ChatGPT and InstructGPT work in this blog: Illustrating Reinforcement Learning from Human Feedback (RLHF)
- Reinforcement Learning from Human Feedback From Zero to ChatGPT
- A guide on Dialog Agents
This page was made possible thanks to the efforts of Viraat Aryabumi.
Compatible libraries
Note A faster and smaller model than the famous BERT model.
Note DialoGPT is a large-scale pretrained dialogue response generation model for multi-turn conversations.
Note A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.
Note ConvAI is a dataset of human-to-bot conversations labeled for quality. This data can be used to train a metric for evaluating dialogue systems.
Note EmpatheticDialogues is a dataset of 25k conversations grounded in emotional situations.
Note A chatbot based on the Blender model.
- bleu
- BLEU score is calculated by counting the n-grams shared between the generated sequence and the reference. An n-gram is a run of n consecutive tokens: a unigram is a single token, a bi-gram is a pair of tokens, and so on. The score ranges from 0 to 1, where 1 means the generated sequence perfectly matches the reference and 0 means it does not match at all.
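To make the scoring concrete, here is a minimal sketch of computing BLEU for a generated reply against a reference reply using the evaluate library (assuming the evaluate package is installed; the exact value depends on the tokenization the metric applies).
import evaluate

# Load the BLEU metric
bleu = evaluate.load("bleu")

# One generated reply and its reference(s); each prediction gets a list of references
predictions = ["Hi Julien! My name is Julia! I am well."]
references = [["Hi Julien! My name is Julia! I am doing well."]]

results = bleu.compute(predictions=predictions, references=references)
print(results["bleu"])  # a score between 0 and 1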