
VMware/open-llama-0.3T-7B-open-instruct-v1.1


UPDATE: Final Version Now Available!

Please use the final version: Open LLaMA 7B Open Instruct



Nomenclature

  • Model: Open-llama
  • Model trained on: 300B (0.3T) tokens
  • Model size: 7B parameters
  • Dataset: Open-instruct-v1.1 (oasst, dolly, hhrlhf)
  • Version: V1 (Alpaca prompt template)

Use in Transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-0.3T-7B-open-instruct-v1.1'

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the model in half precision and place its layers across available devices sequentially
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca-style prompt template used during instruction tuning
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'

input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

# max_length counts the prompt tokens plus the newly generated tokens
output_ids = model.generate(input_ids, max_length=512)

# Drop the prompt tokens so only the generated response is decoded
input_length = input_ids.shape[1]
output_ids = output_ids[:, input_length:]
output = tokenizer.decode(output_ids[0])

print(output)

'''
The attention mechanism of a transformer model is designed to help the model understand the relationship between different parts of a sentence.
The model uses a weighted attention score to determine how much each input token contributes to the output.
The attention score is calculated by looking at the similarity between each input token and the output token, and assigning a weight to each input token based on this similarity.
This way, the model can better understand the relationship between different parts of a sentence and generate more accurate predictions.

'''
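
To connect the model's answer above to the underlying computation, here is a minimal scaled dot-product attention sketch in PyTorch (illustrative only; the tensor names and toy dimensions are assumptions, not taken from this model's code):

import torch
import torch.nn.functional as F

# Toy dimensions: a batch of 1 sequence with 4 tokens, each an 8-dimensional vector
queries = torch.randn(1, 4, 8)
keys = torch.randn(1, 4, 8)
values = torch.randn(1, 4, 8)

# Similarity between every query token and every key token, scaled by sqrt(dim)
scores = queries @ keys.transpose(-2, -1) / (8 ** 0.5)

# Softmax turns the similarities into attention weights that sum to 1 per token
weights = F.softmax(scores, dim=-1)

# Each output token is a weighted mix of the value vectors
attended = weights @ values
print(attended.shape)  # torch.Size([1, 4, 8])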

Drawbacks

  • The model was trained on a partially trained Open-LLaMA checkpoint (300B tokens, roughly 30% of the full training run), so there is substantial room for improvement once fully trained Open-LLaMA checkpoints are available
  • From what we have observed, the model struggles with few-shot prompting (we plan to address this in future iterations)
  • When asked for code, it may or may not wrap the code in markdown format (```)
  • It does not indent Python code

Evaluation

TODO
