
XLM-RoBERTa large model with whole word masking fine-tuned on SQuAD

This model was pretrained with a masked language modeling (MLM) objective and fine-tuned on English and Russian QA datasets.
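
A minimal usage sketch with the transformers question-answering pipeline; the question and context strings below are illustrative placeholders, not taken from the fine-tuning data:

```python
from transformers import pipeline

# Load the fine-tuned model through the question-answering pipeline.
qa_pipeline = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)

# The model handles both English and Russian questions, since it was
# fine-tuned on SQuAD (English) and SberQuAD (Russian).
result = qa_pipeline(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```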

Used QA Datasets

SQuAD + SberQuAD

The original SberQuAD paper is available here and is recommended reading.

Evaluation results

The results obtained are the following (SberQuAD):

f1 = 84.3
exact_match = 65.3
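
For reference, a hedged sketch of how SQuAD-style f1 and exact_match scores like these could be computed with the evaluate library; the single prediction/reference pair below is an illustrative placeholder, not actual SberQuAD data:

```python
import evaluate

# SQuAD-style metrics: exact_match and token-level f1.
squad_metric = evaluate.load("squad")

# Illustrative single example; a real evaluation would iterate over the
# full SberQuAD validation set.
predictions = [{"id": "1", "prediction_text": "в Париже"}]
references = [{
    "id": "1",
    "answers": {"text": ["в Париже"], "answer_start": [42]},
}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```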