September 28, 2023

Run Mistral 7B Instruct with an API

Mistral 7B is a 7.3B-parameter model from Mistral AI that outperforms Llama 2 13B on all benchmarks, outperforms Llama 1 34B on many benchmarks, and approaches CodeLlama 7B performance on code.

Contents

  • Mistral 7B Instruct
  • Running Mistral 7B Instruct with Python
  • Mistral 7B Instruct Model Demo
  • Mistral 7B vs Llama 2

Mistral 7B Instruct: Fine-Tuning for Chat

Mistral 7B is fine-tuned on publicly available instruction datasets, resulting in the Mistral 7B Instruct model. It achieves remarkable performance, outperforming all other 7B models on MT-Bench and matching 13B chat models.

Running Mistral 7B Instruct with Python

You can run Mistral 7B Instruct with our Python SDK with just a few lines of code.

To get started, sign up for Clarifai here and get your Personal Access Token (PAT) under the Security section in Settings.

Check out the Code Below:

import os
os.environ["CLARIFAI_PAT"] = "your personal access token"

from clarifai.client.model import Model

# Model Predict
model = Model("https://clarifai.com/mistralai/completion/models/mistral-7B-Instruct")
model_prediction = model.predict_by_bytes(b"Write a tweet on future of AI", "text")
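The prediction object mirrors the API response, so the generated text can be read from its first output. The sketch below also passes `inference_params` to tune generation; the parameter names used here (`temperature`, `max_tokens`) are assumptions based on common LLM settings, so check the model's page in the Clarifai portal for the parameters it actually supports.

```python
import os
os.environ["CLARIFAI_PAT"] = "your personal access token"

from clarifai.client.model import Model

model = Model("https://clarifai.com/mistralai/completion/models/mistral-7B-Instruct")

# The inference_params names below are assumptions; consult the model page
# for the options this model actually exposes.
model_prediction = model.predict_by_bytes(
    b"Write a tweet on future of AI",
    "text",
    inference_params={"temperature": 0.7, "max_tokens": 100},
)

# The completion text lives in the first output's text field
print(model_prediction.outputs[0].data.text.raw)
```

Requires a valid PAT and network access to run; the response structure matches the gRPC `MultiOutputResponse` used in the client-API example below.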

You can also run the Mistral 7B Instruct model with the Clarifai Python API client.

Check out the Code Below:

######################################################################################################
# In this section, we set the user authentication, user and app ID, model details, and the raw
# text we want to use as input. Change these strings to run your own example.
######################################################################################################

# Your PAT (Personal Access Token) can be found in the portal under Authentication
PAT = ''
# Specify the correct user_id/app_id pairings
# Since you're making inferences outside your app's scope
USER_ID = 'mistralai'
APP_ID = 'completion'
# Change these to whatever model and text you want to use
MODEL_ID = 'mistral-7B-Instruct'
MODEL_VERSION_ID = 'c27fe1804b38476ca810dd85bd997a3d'
RAW_TEXT = 'Write a tweet on future of AI'
# To use a hosted text file, assign the url variable
# TEXT_FILE_URL = 'https://samples.clarifai.com/negative_sentence_12.txt'
# Or, to use a local text file, assign the location variable
# TEXT_FILE_LOCATION = 'YOUR_TEXT_FILE_LOCATION_HERE'

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

# To use a local text file, uncomment the following lines
# with open(TEXT_FILE_LOCATION, "rb") as f:
#     file_bytes = f.read()

post_model_outputs_response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=userDataObject,  # The userDataObject is created above and is required when using a PAT
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,  # This is optional. Defaults to the latest model version
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        raw=RAW_TEXT
                        # url=TEXT_FILE_URL
                        # raw=file_bytes
                    )
                )
            )
        ]
    ),
    metadata=metadata
)

if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
    print(post_model_outputs_response.status)
    raise Exception(f"Post model outputs failed, status: {post_model_outputs_response.status.description}")

# Since we have one input, one output will exist here
output = post_model_outputs_response.outputs[0]

print("Completion:\n")
print(output.data.text.raw)

You can also run the Mistral 7B Instruct model using the other Clarifai client libraries, such as JavaScript, Java, cURL, NodeJS, and PHP, here.

Model Demo in the Clarifai Platform:

Try out the Mistral 7B Instruct model here: clarifai.com/mistralai/completion/models/mistral-7B-Instruct


Mistral 7B vs Llama 2

Mistral 7B was evaluated against various Llama models across a wide range of benchmarks.

  1. Mistral 7B Dominance: Mistral 7B stands out by significantly outperforming Llama 2 13B across all metrics.

  2. Comparable to Llama 34B: Since there is no Llama 2 34B version available, Mistral 7B's performance is compared with Llama 1 34B, and it is on par.

  3. Code and Reasoning: Mistral 7B also excels in code-related and reasoning benchmarks, and it approaches CodeLlama 7B performance on code.

Keep up to speed with AI

  • Follow us on X (Twitter) to get the latest on LLMs

  • Join us in our Discord to talk LLMs