October 18, 2023

Run Zephyr 7B with an API

Zephyr-7B-alpha is a new open-source language model from Hugging Face based on Mistral-7B. It surpasses Llama 2 70B Chat on MT-Bench.

You can now try out zephyr-7B-alpha in the Clarifai Platform and access it through the API.

Table of Contents

  1. Introduction
  2. Prompt Template
  3. Running Zephyr 7B with Python
  4. Running Zephyr 7B with JavaScript
  5. Best Use Cases
  6. Limitations

Introduction

Zephyr-7B-alpha is the first model in the Zephyr series and is based on Mistral-7B. It has been fine-tuned using Direct Preference Optimization (DPO) on a mix of publicly available and synthetic datasets. Notably, the built-in alignment of these datasets was removed to boost performance on MT-Bench and make the model more helpful.

Prompt Template

To interact effectively with the Zephyr-7B-alpha model, use the prompt template below.

<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>

Here's an example of how to use the prompt template:

<|system|>
You are a friendly chatbot who always responds in the style of a pirate.</s>
<|user|>
What's the easiest way to peel all the cloves from a head of garlic?</s>
<|assistant|>
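
As a quick illustration, here's one way to build that formatted prompt in Python before sending it to the API. The system_prompt and prompt variable names are just placeholders for this sketch:

# Fill in the Zephyr chat template; the variable names are illustrative.
system_prompt = "You are a friendly chatbot who always responds in the style of a pirate."
prompt = "What's the easiest way to peel all the cloves from a head of garlic?"

formatted_prompt = (
    f"<|system|>\n{system_prompt}</s>\n"
    f"<|user|>\n{prompt}</s>\n"
    "<|assistant|>\n"
)
print(formatted_prompt)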

Running Zephyr 7B with Python

You can run Zephyr 7B using our Python SDK in just a few lines of code.

To get started, sign up for Clarifai here and get your Personal Access Token (PAT) under the Security section in Settings.

Export your PAT as an environment variable:

export CLARIFAI_PAT={your personal access token}

Check out the code below:
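
The snippet below is a minimal sketch using the Clarifai Python SDK's Model interface (pip install clarifai); it calls the model by its URL and prints the generated text:

# Minimal sketch: call zephyr-7B-alpha through the Clarifai Python SDK.
# Assumes CLARIFAI_PAT is exported as shown above.
from clarifai.client.model import Model

# Prompt built with the Zephyr template from the earlier section
prompt = (
    "<|system|>\nYou are a friendly chatbot who always responds in the style of a pirate.</s>\n"
    "<|user|>\nWhat's the easiest way to peel all the cloves from a head of garlic?</s>\n"
    "<|assistant|>\n"
)

# Model URL from the demo link below
model_url = "https://clarifai.com/huggingface-research/zephyr/models/zephyr-7B-alpha"

model_prediction = Model(model_url).predict_by_bytes(prompt.encode(), input_type="text")

print(model_prediction.outputs[0].data.text.raw)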

Running Zephyr 7B with JavaScript
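
Here's a minimal sketch using the clarifai-nodejs-grpc client (npm install clarifai-nodejs-grpc); it sends the same templated prompt and prints the completion:

// Minimal sketch: call zephyr-7B-alpha with the Clarifai Node.js gRPC client.
// Assumes CLARIFAI_PAT is exported as shown above.
const { ClarifaiStub, grpc } = require("clarifai-nodejs-grpc");

const stub = ClarifaiStub.grpc();
const metadata = new grpc.Metadata();
metadata.set("authorization", "Key " + process.env.CLARIFAI_PAT);

// Prompt built with the Zephyr template from the earlier section
const prompt =
  "<|system|>\nYou are a friendly chatbot who always responds in the style of a pirate.</s>\n" +
  "<|user|>\nWhat's the easiest way to peel all the cloves from a head of garlic?</s>\n" +
  "<|assistant|>\n";

stub.PostModelOutputs(
  {
    // user_id and app_id come from the model URL below
    user_app_id: { user_id: "huggingface-research", app_id: "zephyr" },
    model_id: "zephyr-7B-alpha",
    inputs: [{ data: { text: { raw: prompt } } }],
  },
  metadata,
  (err, response) => {
    if (err) throw err;
    if (response.status.code !== 10000) {
      throw new Error("Request failed: " + response.status.description);
    }
    console.log(response.outputs[0].data.text.raw);
  }
);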

You can also run the Zephyr model using other Clarifai client libraries such as Java, cURL, NodeJS, and PHP here.

Model Demo in the Clarifai Platform:

Try out the zephyr-7B-alpha model here: https://clarifai.com/huggingface-research/zephyr/models/zephyr-7B-alpha

Best Use Cases

Chat applications

The Zephyr-7B-alpha model is well suited for chat applications. It was initially fine-tuned on a version of the UltraChat dataset, which contains synthetic dialogues generated by ChatGPT, and then further refined with Hugging Face TRL's DPOTrainer on the openbmb/UltraFeedback dataset, which contains prompts and model completions ranked by GPT-4. This training process makes the model perform particularly well in chat applications.

Limitations

Zephyr-7B-alpha has not been aligned to human preferences using techniques like Reinforcement Learning from Human Feedback (RLHF). As a result, it can produce problematic outputs, especially when intentionally prompted to do so.

Keep up to speed with AI

  • Follow us on X (Twitter) to get the latest on LLMs

  • Join us in our Discord to talk LLMs