Mistral-7B-Instruct is a state-of-the-art 7.3-billion-parameter large language model (LLM) that outperforms Llama 2 13B on multiple NLP benchmarks, including code-related tasks.
max_tokens: The maximum number of tokens to generate. Lower limits return responses faster.
temperature: A decimal value that controls the randomness of the response; lower values make the output more deterministic.
top_p: An alternative to sampling with temperature; the model samples only from the smallest set of tokens whose cumulative probability mass reaches top_p.
top_k: Limits the model's predictions to the k most probable tokens at each step of generation.
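The sampling parameters above can be illustrated with a small standalone sketch. This is not the hosted model's API; it shows, under simplified assumptions, how temperature scaling, top-k filtering, and top-p (nucleus) filtering each reshape a next-token probability distribution. All function names here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature divides the logits before normalization: values below 1.0
    # sharpen the distribution (more deterministic), values above flatten it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    # Zero out everything except the k most probable tokens, then renormalize.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(ranked[:k])
    filtered = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    total = sum(filtered)
    return [p / total for p in filtered]

def top_p_filter(probs, p):
    # Nucleus sampling: keep the smallest set of tokens whose cumulative
    # probability mass reaches p, zero out the rest, and renormalize.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = set(), 0.0
    for i in ranked:
        keep.add(i)
        cum += probs[i]
        if cum >= p:
            break
    filtered = [q if i in keep else 0.0 for i, q in enumerate(probs)]
    total = sum(filtered)
    return [q / total for q in filtered]
```

In practice the model samples one token from the filtered distribution, appends it to the context, and repeats until it emits a stop token or hits the max_tokens limit.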
Notes
ID:
Model Type ID: Text To Text
Input Type: text
Output Type: text
Description: Mistral-7B-Instruct is a state-of-the-art 7.3-billion-parameter large language model (LLM) that outperforms Llama 2 13B on multiple NLP benchmarks, including code-related tasks.