zephyr-7B-alpha

Zephyr is a 7-billion-parameter LLM fine-tuned from Mistral-7B that outperforms Llama 2 70B Chat on MT Bench.

Input

Prompt

Press Ctrl + Enter to submit
The maximum number of tokens to generate. Shorter token lengths will provide faster performance.
A decimal number that determines the degree of randomness in the response. Higher values produce more varied output; lower values make the output more deterministic.
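To illustrate conceptually how the temperature parameter controls randomness (this is a generic sketch of temperature-scaled softmax sampling, not the service's internal implementation):

```python
import math

def sample_distribution(logits, temperature):
    """Convert raw logits to probabilities with temperature scaling.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = sample_distribution(logits, 0.5)  # sharper: top token dominates
hot = sample_distribution(logits, 2.0)   # flatter: closer to uniform
print(cold[0] > hot[0])  # the top token is more likely at low temperature
```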

Output

Submit a prompt for a response.

Notes

Prompt Template

To interact with Zephyr-7B-α effectively, use the following prompt template:

<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>

Example

<|system|>
 You are a friendly chatbot who always responds in the style of a pirate.</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
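The template above can be assembled programmatically. The helper below is a hypothetical convenience function (not part of any SDK) that joins the system and user turns with the `</s>` end-of-sequence markers and newlines shown in the template:

```python
def build_zephyr_prompt(system_message: str, user_message: str) -> str:
    """Assemble a Zephyr-style prompt string from a system and a user message."""
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_zephyr_prompt(
    "You are a friendly chatbot who always responds in the style of a pirate.",
    "How many helicopters can a human eat in one sitting?",
)
print(prompt)
```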

Introduction

Zephyr is a series of language models designed to serve as helpful assistants. Zephyr-7B-α is the first model in this series and represents a fine-tuned version of mistralai/Mistral-7B-v0.1. It was trained on a combination of publicly available and synthetic datasets using Direct Preference Optimization (DPO) to improve its performance, and it outperforms Llama 2 70B Chat on MT Bench.

Zephyr-7B-α

Zephyr-7B-α is the first model in the Zephyr series and is based on mistralai/Mistral-7B-v0.1. It has been fine-tuned using Direct Preference Optimization (DPO) on a mix of publicly available and synthetic datasets. Notably, the in-built alignment of these datasets was removed to boost performance on the MT Bench and make the model more helpful.
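The DPO objective mentioned above can be sketched in a few lines. This is a minimal per-example version of the standard DPO loss, assuming you already have sequence log-probabilities for the chosen and rejected completions under the policy and a frozen reference model; it is illustrative only, not the actual training code:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    The margins are log-probability ratios of the policy against the
    frozen reference model; beta controls how strongly the policy is
    pushed away from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Preferring the chosen completion more than the reference lowers the loss
better = dpo_loss(-1.0, -3.0, -2.0, -2.0)
neutral = dpo_loss(-2.0, -2.0, -2.0, -2.0)
print(better < neutral)
```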

Run Zephyr 7B with an API

You can run the Zephyr 7B Model API using Clarifai's Python SDK.

Export your PAT as an environment variable

**export CLARIFAI_PAT={your personal access token}**

Check out the code below to run the model:

import os

from clarifai.client.model import Model

# The SDK reads your PAT from the CLARIFAI_PAT environment variable set above.
system_message = "You are a friendly chatbot who always responds in the style of a pirate."
prompt = "Write a tweet on future of AI"

# Build the Zephyr chat template. Each turn must end with </s> and be
# separated by newlines; plain backslash line continuations would drop them.
prompt_template = f"""<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
"""

# Model Predict
model_prediction = Model("https://clarifai.com/huggingface-research/zephyr/models/zephyr-7B-alpha").predict_by_bytes(prompt_template.encode(), "text")
print(model_prediction.outputs[0].data.text.raw)

You can also run the Zephyr 7B API using other Clarifai client libraries, such as Java, cURL, NodeJS, and PHP, here.

Use Cases

Zephyr-7B-α was initially fine-tuned on a variant of the UltraChat dataset, which includes synthetic dialogues generated by ChatGPT. Further alignment was achieved using Hugging Face TRL's DPOTrainer on the openbmb/UltraFeedback dataset, which consists of prompts and model completions ranked by GPT-4. This makes the model well suited to chat applications.

Limitations

Zephyr-7B-α has not been aligned to human preferences using techniques like Reinforcement Learning from Human Feedback (RLHF), nor has it undergone in-the-loop filtering of responses like ChatGPT. As a result, it can produce problematic outputs, especially when intentionally prompted. The size and composition of the corpus used to train the base model (mistralai/Mistral-7B-v0.1) are also unknown; however, it likely included a mix of web data and technical sources such as books and code. See the Falcon 180B model card for an example of this.

  • ID
    zephyr-7B-alpha
  • Model Type ID
    Text To Text
  • Input Type
    text
  • Output Type
    text
  • Description
    Zephyr is a 7-billion-parameter LLM fine-tuned from Mistral-7B that outperforms Llama 2 70B Chat on MT Bench.
  • Last Updated
    Oct 26, 2023
  • Privacy
    PUBLIC
  • Use Case
  • License
  • Badge
    zephyr-7B-alpha