Inference parameters:
• max_tokens — the maximum number of tokens to generate; shorter lengths return faster.
• temperature — a decimal number that determines the degree of randomness in the response.
• top_k — limits the model's predictions to the k most probable tokens at each step of generation.
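To make the top_k parameter concrete, here is a minimal sketch of top-k sampling over a toy next-token distribution. The function and probabilities are illustrative only; they are not part of the Clarifai or Cohere APIs.

```python
import random

def top_k_sample(probs, k):
    # Keep only the k most probable tokens, then sample from that
    # truncated distribution (renormalization happens implicitly by
    # drawing r from [0, total) over the kept mass).
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    r = random.random() * total
    for tok, p in top:
        r -= p
        if r <= 0:
            return tok
    return top[-1][0]

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
print(top_k_sample(probs, 2))  # only "the" or "a" can be returned
```

With k=2, "cat" and "dog" can never be sampled no matter how the random draw falls, which is why lower top_k values make generations more predictable.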
Notes
Introduction
Command-R+ is a cutting-edge large language model (LLM) developed by Cohere. It is the latest addition to the R-series of LLMs, emphasizing high efficiency and strong accuracy to support real-world business applications.
Command-R+ Model
Command-R+ is the latest iteration in our R-series of LLMs, specifically engineered to strike an optimal balance between efficiency and accuracy. Its 128k-token context window sets a new benchmark for LLM capabilities.
Features:
• 128k-token context window
• Advanced Retrieval Augmented Generation (RAG) with citation to reduce hallucinations
• Multilingual coverage in 10 key languages
• Tool Use to automate sophisticated business processes
Multilingual Support:
• Offers extensive support for 10 key global business languages, ensuring wide-reaching applicability.
• Languages supported: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese.
• Multilingual capability enables accurate responses from diverse data sources, facilitating global business operations.
Run Command-R+ with an API
Running the API with Clarifai's Python SDK
You can run the Command-R+ Model API using Clarifai’s Python SDK.
Export your PAT as an environment variable. Then, import and initialize the API Client.
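For example, the Clarifai Python SDK typically picks the token up from the CLARIFAI_PAT environment variable (replace the placeholder with your own Personal Access Token):

```shell
# Export your Clarifai Personal Access Token (PAT).
# "YOUR_PAT_HERE" is a placeholder, not a real token.
export CLARIFAI_PAT="YOUR_PAT_HERE"
```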
from clarifai.client.model import Model

prompt = "What's the future of AI?"

inference_params = dict(
    temperature=0.2,   # degree of randomness in the response
    max_tokens=100,    # maximum number of tokens to generate
    top_k=40,          # sample only from the 40 most probable tokens
    system_prompt="You are a helpful assistant.",
)

# Model Predict
model_prediction = Model(
    "https://clarifai.com/cohere/generate/models/command-r-plus"
).predict_by_bytes(prompt.encode(), input_type="text", inference_params=inference_params)

print(model_prediction.outputs[0].data.text.raw)
You can also run the Command-R+ API using other Clarifai client libraries, such as Java, cURL, NodeJS, and PHP.
Aliases: command-r-plus, Command-R+, command R+, command R plus
Use Cases
Command R+ is versatile, addressing a broad spectrum of enterprise applications. These include, but are not limited to:
• Multilingual customer support automation.
• Sophisticated business analytics and reporting.
• Content creation and translation across multiple languages.
Evaluation
Command-R+ outperforms similar models in the scalable model category and remains competitive with more costly alternatives. It performs strongly on core business capabilities while adhering to strict data privacy and security standards.
Multilingual Evaluation
Comparison of models on FLoRES (in French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, and Chinese) and WMT23 (in German, Japanese, and Chinese) translation tasks.
Multi-step Reasoning
Advantages
• Efficiency and Scalability: Optimized for enterprise-grade workloads, offering seamless integration into production environments.
• Multilingual Support: Ensures accurate, context-aware responses in 10 key languages, facilitating global business operations.
• Cost Reduction: Achieves up to a 57% reduction in tokenization costs for non-English texts, significantly lowering operational expenses.
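To illustrate how a tokenizer that emits fewer tokens translates directly into lower spend, here is a toy calculation assuming linear per-token pricing. The token counts and price are made up for the example; only the 57% figure comes from the text above.

```python
def cost(n_tokens, price_per_1k):
    # Linear pricing: you pay per 1,000 tokens processed.
    return n_tokens / 1000 * price_per_1k

# Hypothetical: a non-English document that a less efficient
# tokenizer splits into 10,000 tokens, at $0.50 per 1k tokens.
baseline = cost(10_000, 0.50)
# A tokenizer producing 57% fewer tokens for the same text:
improved = cost(10_000 * (1 - 0.57), 0.50)

print(f"savings: {1 - improved / baseline:.0%}")  # prints "savings: 57%"
```

Because pricing is linear in token count, the percentage saved on tokens carries over one-to-one to the percentage saved on cost.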
Limitations
• Language Limitations: While covering 10 key languages, global businesses operating in regions with other languages may require additional solutions.
• Adaptation Time: Integrating and optimizing the model for specific business processes may require significant initial effort.
ID
Model Type ID: Text To Text
Input Type: text
Output Type: text
Description: Command-R+ is a highly efficient, multilingual, enterprise-grade LLM optimized for real-world business applications, boasting advanced RAG capabilities and a 128k-token context window.