Zero-Shot Classification
Transformers · PyTorch · Safetensors · English · deberta-v2 · text-classification · deberta-v3-large · nli · natural-language-inference · multitask · multi-task · pipeline · extreme-multi-task · extreme-mtl · tasksource · zero-shot · rlhf · License: apache-2.0

Model Card for DeBERTa-v3-large-tasksource-nli

DeBERTa-v3-large fine-tuned with multi-task learning on 600 tasks of the tasksource collection. You can further fine-tune this model for any classification or multiple-choice task. This checkpoint has strong zero-shot validation performance on many tasks (e.g. 77% on WNLI). Thanks to the multitask training, the untuned model's CLS embedding also has strong linear-probing performance (90% on MNLI).
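Zero-shot classification with an NLI checkpoint works by recasting each candidate label as an entailment hypothesis against the input text. The sketch below is a minimal, pure-Python illustration of that reduction; `build_nli_pairs` is a hypothetical helper (not part of this model or its card), and the template shown is the common default used by the Transformers zero-shot pipeline.

```python
# Minimal sketch of how zero-shot classification reduces to NLI:
# the input text becomes the premise, and each candidate label is
# slotted into a hypothesis template. The NLI model then scores each
# (premise, hypothesis) pair for entailment; the best-scoring label wins.
# `build_nli_pairs` is a hypothetical helper, shown for illustration only.

def build_nli_pairs(text, labels, template="This example is {}."):
    """Return (premise, hypothesis) pairs for an NLI model to score."""
    return [(text, template.format(label)) for label in labels]

pairs = build_nli_pairs(
    "I loved this film, the acting was superb.",
    ["positive", "negative"],
)
for premise, hypothesis in pairs:
    print(premise, "=>", hypothesis)
```

In practice the Transformers `zero-shot-classification` pipeline performs this reduction internally when you pass `candidate_labels`, so you never build the pairs by hand.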

This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets, including BIG-bench, Anthropic RLHF, and ANLI, alongside many NLI and classification tasks, with one SequenceClassification head per task but a single shared encoder. Each task had a task-specific CLS embedding, which was dropped 10% of the time during training to facilitate using the model without it. All multiple-choice tasks used the same classification layers. For classification tasks, heads shared weights if their label sets matched. The number of examples per task was capped at 64k. The model was trained for 80k steps with a batch size of 384 and a peak learning rate of 2e-5.
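The per-task example cap described above can be sketched as follows. This is a hypothetical illustration, not the actual tasksource preprocessing code; `cap_per_task` and the toy data are made up for the example, and only the 64k figure comes from the card.

```python
import random

# Hypothetical sketch of capping each task's training set at a fixed
# number of examples (the card states a 64k cap per task). The real
# preprocessing lives in the tasksource/tasknet training code.
def cap_per_task(task_datasets, cap=64_000, seed=42):
    """Subsample each task's examples down to at most `cap` rows."""
    rng = random.Random(seed)
    capped = {}
    for name, rows in task_datasets.items():
        if len(rows) > cap:
            # sample without replacement so no example repeats
            capped[name] = rng.sample(rows, cap)
        else:
            capped[name] = list(rows)
    return capped

# Toy illustration with a tiny cap:
toy = {"mnli": list(range(10)), "wnli": list(range(3))}
capped = cap_per_task(toy, cap=5)
print({k: len(v) for k, v in capped.items()})  # {'mnli': 5, 'wnli': 3}
```

Capping per task keeps high-resource datasets from dominating the multitask mixture while leaving small tasks untouched.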

tasksource training code: https://colab.research.google.com/drive/1iB4Oxl9_B5W3ZDzXoWJN-olUbqLBxgQS?usp=sharing

Software

https://github.com/sileod/tasksource/
https://github.com/sileod/tasknet/
Training took 6 days on an Nvidia A100 40GB GPU.

Citation

More details in this article:

@article{sileo2023tasksource,
  title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
  author={Sileo, Damien},
  url= {https://arxiv.org/abs/2301.05948},
  journal={arXiv preprint arXiv:2301.05948},
  year={2023}
}

Loading a specific classifier

Classifiers for all tasks are available; see https://huggingface.co/sileod/deberta-v3-large-tasksource-adapters

Model Card Contact

[email protected]

Model size: 435M params (Safetensors; tensor types: I64, F32)
