tinyroberta-squad2
This is the distilled version of the deepset/roberta-base-squad2 model. It has comparable prediction quality and runs at twice the speed of the base model.
Overview
Language model: tinyroberta-squad2
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack
Infrastructure: 4x Tesla V100
Hyperparameters
batch_size = 96
n_epochs = 4
base_LM_model = "deepset/tinyroberta-squad2-step1"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
distillation_loss_weight = 0.75
temperature = 1.5
teacher = "deepset/roberta-large-squad2"
Distillation
This model was distilled using the TinyBERT approach described in this paper and implemented in Haystack. First, we performed intermediate layer distillation with roberta-base as the teacher, which resulted in deepset/tinyroberta-6l-768d. Then we performed task-specific distillation: further intermediate layer distillation on an augmented version of SQuAD 2.0 with deepset/roberta-base-squad2 as the teacher, followed by prediction layer distillation with deepset/roberta-large-squad2 as the teacher.
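In prediction layer distillation, the student is trained against both the gold labels and the teacher's temperature-softened output distribution. A minimal sketch of what such a loss can look like, using the temperature and distillation_loss_weight values from the hyperparameters above (the exact formulation in Haystack's implementation may differ):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=1.5, distillation_loss_weight=0.75):
    # Hard-label loss on the gold answer positions
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label loss: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradient magnitudes stay comparable
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - distillation_loss_weight) * ce + distillation_loss_weight * kl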
Usage
In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/tinyroberta-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/tinyroberta-squad2")
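Once loaded, the reader can answer questions against in-memory documents. A minimal sketch, assuming Haystack v1's reader.predict() API and haystack.schema.Document (the example text and query are illustrative):

from haystack.schema import Document

docs = [Document(content="The option to convert models between FARM and transformers "
                         "gives freedom to the user and lets people easily switch between frameworks.")]
prediction = reader.predict(query="Why is model conversion important?", documents=docs, top_k=1)
print(prediction["answers"][0].answer)

In a full pipeline you would pair the reader with a retriever so that only the most relevant documents reach the reader.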
In Transformers
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/tinyroberta-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
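The model and tokenizer loaded in b) can also be used for manual inference. A minimal sketch (the span decoding below is the simplest possible; the pipeline in a) additionally ranks answers and handles unanswerable questions):

import torch

inputs = tokenizer(QA_input["question"], QA_input["context"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start and end token positions and decode the span
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))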
Performance
Evaluated on the SQuAD 2.0 dev set with the official eval script.
"exact": 78.69114798281817,
"f1": 81.9198998536977,
"total": 11873,
"HasAns_exact": 76.19770580296895,
"HasAns_f1": 82.66446878592329,
"HasAns_total": 5928,
"NoAns_exact": 81.17746005046257,
"NoAns_f1": 81.17746005046257,
"NoAns_total": 5945
Authors
Branden Chan
Timo Möller
Malte Pietsch
Tanay Soni
Michel Bartels
About us
deepset is the company behind the open-source NLP framework Haystack, which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.
Some of our other work:
- roberta-base-squad2
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
Get in touch and join the Haystack community
For more info on Haystack, visit our GitHub repo and Documentation.
We also have a Discord community open to everyone!
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!