Completion parameters:
max_tokens: The maximum number of tokens to generate. Shorter limits return responses faster.
temperature: A decimal number that controls the degree of randomness in the response.
top_p: An alternative to sampling with temperature, where the model considers only the tokens that make up the top_p probability mass.
top_k: Limits the model's predictions to the k most probable tokens at each step of generation.
Notes
Introduction
Mixtral 8x22B is a high-quality Sparse Mixture-of-Experts (SMoE) model developed by Mistral AI. Mixtral 8x22B represents a significant advancement in open models, offering novel capabilities and improved cost-performance trade-offs.
Mixtral 8x22B Model
Mixtral-8x22B is the latest and largest mixture-of-experts large language model (LLM) from Mistral AI. It is a state-of-the-art model that uses a sparse mixture-of-experts (MoE) architecture with eight 22B-parameter experts. During inference, two experts are selected for each token, which lets a very large model remain fast and inexpensive to run. This model is not instruction tuned.
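To make the routing concrete, here is a minimal, self-contained sketch of top-2 expert routing under simplified assumptions (random toy weights and a plain gating matrix); it illustrates the idea rather than Mistral AI's actual implementation:

import numpy as np

# Toy sketch of top-2 mixture-of-experts routing; not Mistral AI's code.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

expert_weights = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
gate_weights = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x):
    """Route each token through its two highest-scoring experts and mix the outputs."""
    out = np.zeros_like(x)
    for t, token in enumerate(x):                 # x has shape (tokens, d_model)
        scores = token @ gate_weights             # gate scores for all 8 experts
        top = np.argsort(scores)[-top_k:]         # indices of the 2 best experts
        gates = np.exp(scores[top] - scores[top].max())
        gates /= gates.sum()                      # softmax over the selected experts only
        for g, idx in zip(gates, top):
            out[t] += g * (token @ expert_weights[idx])   # only 2 of 8 experts execute
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)                    # (4, 16): only a fraction of the expert compute is used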
Key Features:
Sparse Mixture-of-Experts architecture
Efficiently outperforms Llama 2 70B and GPT-3.5 on various benchmarks
Gracefully handles a context of 32k tokens
Multilingual support: English, French, Italian, German, and Spanish
Strong performance in code generation
Run Mixtral 8x22B with an API
Running the API with Clarifai's Python SDK
You can run the Mixtral 8x22B Model API using Clarifai’s Python SDK.
Export your PAT as an environment variable. Then, import and initialize the API Client.
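As a minimal sketch, assuming the SDK reads the token from the CLARIFAI_PAT environment variable and using a placeholder value, you can also set it from Python before creating the client:

import os

# Placeholder PAT; substitute your real token, or export it in your shell first,
# e.g. export CLARIFAI_PAT="your_pat_here"
os.environ["CLARIFAI_PAT"] = "your_pat_here"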
from clarifai.client.model import Model
prompt = "What's the future of AI?"

inference_params = dict(temperature=0.7, max_tokens=200, top_k=50, top_p=0.95)

# Model Predict
model_prediction = Model("https://clarifai.com/mistralai/completion/models/mixtral-8x22B").predict_by_bytes(
    prompt.encode(), input_type="text", inference_params=inference_params
)

print(model_prediction.outputs[0].data.text.raw)
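To build intuition for the inference_params above, the sketch below shows one common way temperature, top_k, and top_p are applied to next-token probabilities during sampling. It is an illustrative approximation with made-up toy logits, not the vendor's actual decoding code:

import numpy as np

def sample_next_token(logits, temperature=0.7, top_k=50, top_p=0.95, seed=None):
    """Toy temperature / top-k / top-p sampling over a vector of next-token logits."""
    rng = np.random.default_rng(seed)
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-6)  # lower temperature -> sharper distribution
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    order = np.argsort(probs)[::-1][:top_k]       # top-k: keep only the k most probable tokens
    keep = np.cumsum(probs[order]) <= top_p       # top-p: keep tokens while cumulative mass stays within top_p
    keep[0] = True                                # always keep at least the most probable token
    order = order[keep]

    renorm = probs[order] / probs[order].sum()    # renormalize over the surviving tokens
    return int(rng.choice(order, p=renorm))

print(sample_next_token([2.0, 1.5, 0.3, -1.0, -2.5]))  # samples an index from a toy 5-token vocabulary

In general, lower temperature and smaller top_k or top_p values make the output more deterministic, while higher values produce more varied responses.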
You can also run the Mixtral 8x22B API using other Clarifai client libraries, such as Java, cURL, NodeJS, and PHP.
Aliases: Mixtral, mixtral-8*22b, mixtral-8x22b
Use Cases
Mixtral 8x22B excels in various applications, including but not limited to:
Natural Language Processing tasks
Multilingual applications
Code generation
Instruction-following models
Advantages
Open-weight model with a permissive Apache 2.0 license
Best overall model in terms of cost/performance trade-offs
Matches or outperforms GPT-3.5 on standard benchmarks
Disclaimer
Please be advised that this model utilizes wrapped Artificial Intelligence (AI) models provided by TogetherAI (the "Vendor"). These AI models may collect, process, and store data as part of their operations. By using our website and accessing these AI models, you hereby consent to the data practices of the Vendor. We do not have control over the data collection, processing, and storage practices of the Vendor. Therefore, we cannot be held responsible or liable for any data handling practices, data loss, or breaches that may occur. It is your responsibility to review the privacy policies and terms of service of the Vendor to understand their data practices. You can access the Vendor's privacy policy and terms of service at https://www.togetherai.com/legal/privacy-policy.
We disclaim all liability with respect to the actions or omissions of the Vendor, and we encourage you to exercise caution and to ensure that you are comfortable with these practices before utilizing the AI models hosted on our site.
ID: mixtral-8x22B
Model Type ID: Text To Text
Input Type: text
Output Type: text
Description: Mixtral-8x22B is the latest and largest mixture-of-experts large language model (LLM) from Mistral AI, a state-of-the-art model built from a sparse mixture of eight 22B-parameter experts (MoE).