New: Introducing Compute Orchestration

Armada Predict

High-performance, fully managed inference serving


Save up to 70% on inference costs with Clarifai

A fully managed model orchestration service that assigns models to the most efficient compute nodes and scales up and down to maximize your compute utilization while meeting enterprise-grade production volumes.

Rapidly deploy models yourself

Use our UI or our SDKs to upload your own model, or choose from thousands of the world’s best models in our community. Once a model is uploaded, it’s automatically available for any amount of production traffic. Clarifai solves the MLOps headaches for you so you can focus on building value.
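For example, here is a minimal sketch of what serving a prediction through the Python SDK could look like; the model URL, access token, and exact method names are illustrative and may differ across SDK versions.

```python
from clarifai.client.model import Model

# Illustrative values: replace the model URL and personal access token (PAT)
# with your own. A community model or an uploaded custom model works the same way.
model = Model(
    url="https://clarifai.com/clarifai/main/models/general-image-recognition",
    pat="YOUR_PAT",
)

# Send one image URL to the hosted model and get back predicted concepts.
response = model.predict_by_url(
    "https://samples.clarifai.com/metro-north.jpg",
    input_type="image",
)

# Print each predicted concept with its confidence score.
for concept in response.outputs[0].data.concepts:
    print(f"{concept.name}: {concept.value:.3f}")
```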


Optimal GPU sharing and battle-tested autoscaling

Our inference orchestration maps models to the most efficient CPUs or GPUs. Our battle-tested endpoints handle massive autoscaling, with options for fully configurable scaling policies. You gain effortless accuracy vs. performance trade-offs for real-world applications.

Best-in-class evaluation tools

Compare multiple models against each other, or against datasets, to easily gauge how your models will perform.


Combine your models into advanced workflows

Connect one or more AI models and other functional logic together to gain insights beyond what a single AI model could deliver alone. This workflow engine becomes the foundation for more advanced capabilities, including automatic data labeling, search indexing, and real-time data analysis.
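As a rough sketch (the workflow URL, PAT, and method names below are placeholders and may vary by SDK version), a multi-model workflow can be invoked with a single call, with each chained step contributing its own output:

```python
from clarifai.client.workflow import Workflow

# Illustrative values: point this at a workflow you have built in your own app.
workflow = Workflow(
    url="https://clarifai.com/your-user-id/your-app/workflows/your-workflow-id",
    pat="YOUR_PAT",
)

# One call runs every model and operator chained in the workflow.
response = workflow.predict_by_url(
    "https://samples.clarifai.com/metro-north.jpg",
    input_type="image",
)

# Each step in the workflow returns its own output in the combined result.
for output in response.results[0].outputs:
    print(output.model.id)
```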

Collect production traffic for evaluation and fine tuning

Clarifai collectors help you understand your traffic patterns over time. They let you monitor and collect user/production data and identify model performance gaps. And if you are building custom models, collectors are a critical building block toward active learning, because they let you curate datasets to quickly fine-tune your models based on production data.


Inference orchestration with built-in model optimization

Batch or real-time inference for trained models with simple, efficient one-click deployment. Deploy anywhere.
Enterprise-grade uptime, industry-leading latency
99.9% uptime SLA on your mission-critical ML compute. Faster than our competitors, with cloud serving at sub-100 ms latencies.
Collaborate with AI Lake
Discover, reuse, share, and collaborate with the community and organization teams across developers, data scientists, and business domain experts.
Deploy anywhere with confidence
Our battle-tested endpoints can be deployed anywhere: any cloud, on-premises, or air-gapped environments. We are SOC 2 Type II compliant.
Model Management and Customizability
Built-in version control, leaderboards, dashboards, and Streamlit module plug-ins for custom visualization; a wide variety of model customizations in a few clicks; and drag-and-drop workflows with custom logic.

Technology Partners

Clarifai combines its capabilities with world-class technology partners to enable organizations to solve a broader set of digital transformation challenges.

Clarifai: Your End-to-End AI Solution

Together with our patented AI platform, Clarifai provides a solution spanning data preparation, model development, and operationalization so you can build and deploy models at scale.
AI Lake
Organize and collaborate
Enlight
Model training & evaluation
Mesh
AI workflows
Spacetime
Data management and vector search

Resources

Create, Train, Get, Update, Delete
Learn how to create, train, get, update, delete, predict, and search your models
Model Types
Learn about some of the most important model types on the Clarifai platform
Deep Fine-Tuning
We offer a variety of prebuilt models that are designed to help you create AI solutions quickly.

Simplify your AI deployment! Deploy, serve, and scale models with a single click