mistral-7B-OpenOrca

Mistral-7B-OpenOrca is a high-performing large language model (LLM) created by fine-tuning the Mistral-7B base model on the OpenOrca dataset.

Input

Prompt:

  • Max Tokens: The maximum number of tokens to generate. Shorter token lengths provide faster performance.
  • Temperature: A decimal number that determines the degree of randomness in the response.

Output

Submit a prompt for a response.

Notes

Instruction format

Mistral-7B-OpenOrca uses OpenAI's Chat Markup Language (ChatML) format, with <|im_start|> and <|im_end|> tokens added to support it. The prompt should be formatted as follows:

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
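The template above can be filled in with a small helper. A minimal sketch in Python (the `format_chatml` function is illustrative, not part of any SDK):

```python
def format_chatml(system_message: str, prompt: str) -> str:
    """Wrap a system message and user prompt in ChatML turn tokens,
    ending with an open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Build a prompt the model can complete from the assistant turn
chatml_prompt = format_chatml("You are a helpful assistant.", "Write a tweet on future of AI")
print(chatml_prompt)
```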

Introduction

Mistral-7B-OpenOrca is a language model created by fine-tuning the Mistral-7B base model on the OpenOrca dataset. Notably, it is the first 7B model to outperform all models below 30B, securing a top-ranking position. Additionally, it achieves an impressive 98% of Llama2-70B-chat's performance.

Mistral 7B

Mistral 7B is a state-of-the-art (SOTA) language model with 7.3 billion parameters. It represents a significant leap in natural language understanding and generation. The model is released under the Apache 2.0 license, allowing unrestricted use.

  • Performance Superiority: Mistral-7B surpasses Llama2-13B on all benchmark tasks and outperforms Llama 34B on many benchmarks.
  • Versatile Abilities: It approaches CodeLlama-7B performance on code-related tasks while remaining highly proficient in English language tasks.

Mistral-7B-OpenOrca

Mistral-7B-OpenOrca is a language model fine-tuned on the OpenOrca dataset. The OpenOrca dataset is designed to replicate the dataset generated for Microsoft Research's Orca Paper.

The model is recognized for its impressive performance and is ranked #2 on the HuggingFace Leaderboard among models smaller than 30B at the time of release, surpassing all but one 13B model.

Run Mistral-7B-OpenOrca with an API

You can run the Mistral-7B-OpenOrca model using Clarifai's Python SDK.

Check out the code below:

Export your PAT as an environment variable. Then, import and initialize the API Client.

export CLARIFAI_PAT={your personal access token}

from clarifai.client.model import Model

# Model Predict
model_url = "https://clarifai.com/mistralai/completion/models/mistral-7B-OpenOrca"
model_prediction = Model(model_url).predict_by_bytes(b"Write a tweet on future of AI", input_type="text")
print(model_prediction.outputs[0].data.text.raw)

You can also call the Mistral-7B-OpenOrca API using other Clarifai client libraries, including Java, cURL, NodeJS, and PHP.
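The max-token and temperature settings described under Input can also be passed per request. A minimal sketch, assuming the Python SDK's `inference_params` argument and illustrative values; the network call is guarded by a PAT check so the snippet runs without credentials:

```python
import os

# Illustrative generation settings (see the Input section); values are examples
inference_params = {
    "max_tokens": 100,   # cap on generated tokens; shorter is faster
    "temperature": 0.7,  # degree of randomness in the response
}

# Only call the API when a personal access token is exported
if os.environ.get("CLARIFAI_PAT"):
    from clarifai.client.model import Model

    model_url = "https://clarifai.com/mistralai/completion/models/mistral-7B-OpenOrca"
    prediction = Model(model_url).predict_by_bytes(
        b"Write a tweet on future of AI",
        input_type="text",
        inference_params=inference_params,
    )
    print(prediction.outputs[0].data.text.raw)
```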

Evaluation

The model's performance is assessed using several evaluation benchmarks.

HuggingFace Leaderboard Performance

Mistral-7B-OpenOrca outperforms the base model, achieving 105% of the base model's performance on HF Leaderboard evaluations, with an average score of 65.33. This performance surpasses all 7B models and most 13B models at the time of release.

AGIEval Performance

The model's performance on AGIEval indicates that it achieves 129% of the base model's performance, with an average score of 0.397. Additionally, it significantly improves upon the official Mistral-7B-Instruct-v0.1 finetuning, achieving 119% of their performance.

BigBench-Hard Performance

Mistral-7B-OpenOrca demonstrates strong performance on BigBench-Hard, achieving 119% of the base model's performance, with an average score of 0.416.

GPT4ALL Leaderboard Performance

On the GPT4ALL Leaderboard, the model averages a score of 72.38, a slight edge over previous releases that puts it at the top of the leaderboard.

MT-Bench Performance

Mistral-7B-OpenOrca's performance on MT-Bench, which evaluates model response quality across various challenges, is reported to be on par with Llama2-70b-chat, with an average score of 6.86.

  • Model Type ID: Text To Text
  • Input Type: text
  • Output Type: text
  • Last Updated: Oct 17, 2024
  • Privacy: PUBLIC