DeciLM 6B-Instruct
DeciLM 6B-Instruct is a model for short-form instruction following. It is built by LoRA fine-tuning DeciLM 6B on a subset of the OpenOrca dataset.
- Developed by: Deci
- Model type: DeciLM is an auto-regressive language model that uses an optimized transformer decoder architecture with variable Grouped-Query Attention (see the configuration sketch after this list).
- Language(s) (NLP): English
- License: Llama 2 Community License Agreement, with an extension by Deci regarding hosting service providers.
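Variable Grouped-Query Attention means the number of key/value heads can differ from one decoder layer to another. The sketch below shows one way to inspect this from the released configuration; the per-layer attribute name is an assumption about DeciLM's custom config and may not match the released code.

# Inspect the attention configuration (the per-layer attribute name is an
# assumption about DeciLM's custom config and may not match the released code).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Deci/DeciLM-6b-instruct", trust_remote_code=True)
print("attention heads:", config.num_attention_heads)
print("per-layer KV heads:", getattr(config, "num_key_value_heads_per_layer", "not exposed"))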
Model Sources
- Paper: DeciLM 6B Technical Blog
- Demo: DeciLM 6B-Instruct Demo
- Notebook: DeciLM 6B-Instruct Notebook
Uses
The model is intended for commercial and research use in English and can be fine-tuned for use in other languages.
How to Get Started with the Model
Use the code below to get started with the model.
# pip install -q transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Deci/DeciLM-6b-instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True).to(device)
inputs = tokenizer.encode("How do I make french toast? Think through it step by step", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0]))
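For interactive use, generated tokens can be printed as they are produced. Below is a minimal sketch that reuses the model and tokenizer from the snippet above together with transformers' TextStreamer; the prompt and generation settings are illustrative.

# Stream tokens to stdout as they are generated (reuses `model`, `tokenizer`,
# and `device` from the snippet above; the prompt is illustrative).
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
inputs = tokenizer.encode("List three tips for making better coffee.", return_tensors="pt").to(device)
model.generate(inputs, max_new_tokens=100, do_sample=True, top_p=0.95, streamer=streamer)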
Training Details
DeciLM 6B was trained on the SlimPajama dataset using advanced proprietary methodologies that enable fast training. DeciLM 6B was then fine-tuned on a subset of the OpenOrca dataset, giving rise to DeciLM 6B-Instruct.
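The exact fine-tuning recipe is not published in this card. The sketch below illustrates how LoRA adapters could be attached to the base model with the peft library; the base checkpoint id, target module names, rank, and other hyperparameters are illustrative assumptions, not Deci's actual configuration.

# Sketch: attach LoRA adapters to the DeciLM 6B base model with peft.
# The target_modules, rank, alpha, and dropout are illustrative assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Deci/DeciLM-6b", torch_dtype=torch.bfloat16, trust_remote_code=True
)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed module names
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# The adapted model can then be trained on an instruction dataset such as a
# subset of OpenOrca with a standard causal-language-modeling training loop.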
Evaluation
Below are DeciLM 6B-Instruct's evaluation results.
Average | ARC Challenge* | ARC Easy* | BoolQ | HellaSwag* | LAMBADA OpenAI | OpenBookQA | PIQA | TruthfulQA | Winogrande |
---|---|---|---|---|---|---|---|---|---|
62.01 | 43.43 | 70.58 | 77.34 | 74.57 | 70.1 | 33 | 77.52 | 43.89 | 67.64 |

* Accuracy-norm score
Runtime Benchmarks
Inference Tool/Hardware | A10 (tokens/sec) |
---|---|
PyTorch | 652.49 |
Infery LLM | 2,029.6 |
- Throughput (tokens/sec) measured at the optimal batch size for each tool: batch size 64 for PyTorch, batch size 128 for Infery LLM.
- To replicate the results of the PyTorch benchmark, use this code example (a sketch of such a measurement follows below).
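The original benchmark script is not reproduced here; the following is a rough sketch of how a PyTorch throughput measurement of this kind could look. The batch size, generation length, and padding handling are assumptions based on the notes above.

# Rough throughput sketch for the PyTorch baseline (not Deci's benchmark script).
# Batch size 64 is taken from the notes above; other settings are assumptions.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Deci/DeciLM-6b-instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token  # assumption: no dedicated pad token
tokenizer.padding_side = "left"            # left-pad so generation starts at the prompt end
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

prompts = ["How do I make french toast?"] * 64  # assumed optimal PyTorch batch size
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to("cuda")

torch.cuda.synchronize()
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

generated = (outputs.shape[1] - inputs["input_ids"].shape[1]) * outputs.shape[0]
print(f"{generated / elapsed:.1f} tokens/sec")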
Disclaimer
DeciLM 6B-Instruct has not been aligned for safety or trained using RLHF.
How to Cite
Please cite this model using the following format.
@misc{DeciFoundationModels,
  title = {DeciLM 6B Instruct},
  author = {DeciAI Research Team},
  year = {2023},
  url = {https://huggingface.co/Deci/DeciLM-6b-instruct},
}