
The Full Stack AI Developer Platform for Generative AI, LLMs, and Computer Vision

Build on the fastest, production-grade deep learning platform for developers and ML engineers.

170+ Countries
250k+ Users
1M+ AI Models
Billions of predictions served
# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Model Predict
url = "https://clarifai.com/quantum-synthetics/completion/models/fine-tuned-llm"
prompt = b"Which was our best quarter in 2022 and why?"
fine_tuned_llm = Model(url)
model_prediction = fine_tuned_llm.predict_by_bytes(prompt, "text")
 
for output in model_prediction.outputs:
    print(output.data.text.raw)
 
# Output:
# -------
# > Quantum Synthetics Inc., witnessed unprecedented success in 2022's second
#   quarter primarily due to the release of the groundbreaking product, the
#   EcoSphere Purifier. This innovative device, acclaimed for its capability
#   to efficiently transform atmospheric carbon dioxide into oxygen, has not
#   only revolutionized the field of environmental conservation but also 
#   significantly bolstered the company’s reputation in the green technology
#   sector. The EcoSphere Purifier, with its unique design and superior
#   performance, has resonated profoundly with individuals and organizations
#   passionate about ecological sustainability, driving robust sales and
#   reinforcing your company's market presence.

Awards & Recognition

Award-winning technology in AI, Machine Learning & Computer Vision

Build AI Faster
Unlock Value Instantly

Simplify how developers and teams create, share, and run AI at scale
Some of the world’s best teams build with Clarifai
AI Developer Alliance
Bringing developers, companies, and technologies together to share insights, advocate, and bring forth best practices that drive responsible AI innovation.

Generative AI built for developers by developers

Where developers build production-grade computer vision and LLM applications on a Full Stack AI platform.

Build your next generative innovation

Leverage cutting-edge Large Language Models (LLMs) to craft coherent and contextually rich content, connect image generation models to create detailed visuals, utilize image captioning models for nuanced, descriptive narratives, and employ speech generation to render lifelike voice outputs. Harness the automation of workflows to seamlessly interlink generative models, enabling effortless innovation and customization.

# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Model Predict
llm_url = "https://clarifai.com/clarifai/completion/models/llm"
prompt = b"In 2 lines, summarize why the sky is blue."
llm = Model(llm_url)
model_prediction = llm.predict_by_bytes(prompt, "text")
 
for output in model_prediction.outputs:
    print(output.data.text.raw)
 
# Output:
# -------
# > The sky is blue because sunlight hits the atmosphere and the blue
#   wavelengths are scattered more than other colors. This scattering
#   makes the sky appear blue from the ground.
# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Model Predict
url="https://clarifai.com/stability-ai//models/stable-diffusion-xl"
prompt = b"A penguin watching a sunset"
image_generator = Model(url)
model_prediction = image_generator.predict_by_bytes(prompt, "text")
 
# Since we have one input, one output will exist here
output = model_prediction.outputs[0].data.image.base64
 
image_filename = "gen-image.jpg"
with open(image_filename, 'wb') as f:
    f.write(output)
# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Caption Model To Use
url = "https://clarifai.com/salesforce/blip/models/image-caption"
caption_model = Model(url)
 
# Image to caption
image_url = "https://samples.clarifai.com/caption-egg-basket.jpg"
model_prediction = caption_model.predict_by_url(url=image_url,
                                    input_type="image")
for output in model_prediction.outputs:
    print(output.data.text.raw)
 
# Output:
# -------
# > a photograph of a basket of eggs in a basket on a wooden table
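
The workflow automation mentioned above uses the same client pattern: one call runs every model chained in a workflow. Below is a minimal sketch, assuming the SDK's `Workflow` client and Clarifai's public General workflow; the response layout (`results[0].outputs`) follows the workflow results proto and may differ slightly between versions.

# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.workflow import Workflow
 
# Workflow To Use (Clarifai's public General workflow; swap in your own)
workflow_url = "https://clarifai.com/clarifai/main/workflows/General"
workflow = Workflow(workflow_url)
 
# One call runs every model chained in the workflow
image_url = "https://samples.clarifai.com/metro-north.jpg"
workflow_prediction = workflow.predict_by_url(url=image_url,
                                              input_type="image")
 
# Each node in the workflow contributes its own output
for output in workflow_prediction.results[0].outputs:
    print(output.model.id)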

Build RAG in four lines of code

Set up a seamless server-side Retrieval Augmented Generation (RAG) experience in four lines of code with our Python SDK. We make it easy to choose your LLM, create, index, and store embeddings, and build your chat interface, giving you an instant RAG solution. Start with our defaults and customize as you gain experience.

# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.rag import RAG
 
rag_agent = RAG.setup(user_id="USER_ID")
rag_agent.upload(folder_path="~/docs")
rag_agent.chat(messages=[{"role":"human", "content":"What is Clarifai"}])

Inspect Data with Advanced Model Analysis

Use advanced classification models to meticulously categorize and analyze data, enabling swift and accurate decision-making. Employ detection models to identify and locate objects, people, and more within images and videos, providing rich, detailed insights. Harness segmentation models to delineate and differentiate between various elements within an image, facilitating nuanced understanding and analysis. If a pre-trained model doesn't suit your needs, you can easily train your own using the many architectures built into the platform.

# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Classification Model To Use
url = "https://clarifai.com/clarifai/main/general-image-recognition"
 
classify_model = Model(url)
 
# Image to classify
image_url = "https://samples.clarifai.com/metro-north.jpg"
model_prediction = classify_model.predict_by_url(url=image_url,
     input_type="image")
 
for concept in model_prediction.outputs[0].data.concepts:
    print(f"{concept.name}: {concept.value}")
 
# Output:
# -------
# train: 0.9996048808097839
# railway: 0.9992978572845459
# subway system: 0.9982557892799377
# station: 0.9980103373527527
# locomotive: 0.9972555041313171
# transportation system: 0.9969767332077026
# travel: 0.9889694452285767
# commuter: 0.9808903932571411
# platform: 0.980640172958374
# light: 0.9741939902305603
# train station: 0.9687928557395935
# blur: 0.9672884345054626
# city: 0.9615078568458557
# road: 0.961391270160675
# urban: 0.960379421710968
# traffic: 0.9599704742431641
# street: 0.9475027918815613
# public: 0.9343006610870361
# tramway: 0.9319851398468018
# business: 0.9295381903648376
# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Detection Model To Use
url = "https://clarifai.com/clarifai/main/general-image-detection"
 
detect_model = Model(url)
 
# Image to detect objects in
image_url = "https://samples.clarifai.com/metro-north.jpg"
model_prediction = detect_model.predict_by_url(url=image_url,
   input_type="image")
 
regions = model_prediction.outputs[0].data.regions
 
for region in regions:
    # Accessing and rounding the bounding box values
    top_row = round(region.region_info.bounding_box.top_row, 3)
    left_col = round(region.region_info.bounding_box.left_col, 3)
    bottom_row = round(region.region_info.bounding_box.bottom_row, 3)
    right_col = round(region.region_info.bounding_box.right_col, 3)
    
    for concept in region.data.concepts:
        # Accessing and rounding the concept value
        name = concept.name
        value = round(concept.value, 4)
 
        print((f"{name}: {value} BBox: {top_row}, {left_col}, "
               f"{bottom_row}, {right_col}"))
 
# Output:
# -------
# Building: 0.9396 BBox: 0.216, 0.002, 0.552, 0.25
# Person: 0.832 BBox: 0.497, 0.647, 0.669, 0.697
# Tree: 0.6977 BBox: 0.392, 0.365, 0.507, 0.511
# Building: 0.6605 BBox: 0.003, 0.305, 0.974, 0.999
# Tree: 0.5274 BBox: 0.378, 0.932, 0.46, 0.998
# Bench: 0.4542 BBox: 0.743, 0.822, 0.987, 0.999
# Land vehicle: 0.4328 BBox: 0.512, 0.61, 0.573, 0.644
# Person: 0.3903 BBox: 0.522, 0.039, 0.586, 0.058
# Train: 0.3745 BBox: 0.471, 0.29, 0.543, 0.472
# Waste container: 0.3713 BBox: 0.539, 0.738, 0.849, 0.893
# Person: 0.3325 BBox: 0.532, 0.072, 0.578, 0.106
# Note: Install clarifai with `pip install -U clarifai`
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Segmentation Model To Use
url = "https://clarifai.com/clarifai/main/image-general-segmentation"

segment_model = Model(url)
 
# Image to segment
image_url = "https://samples.clarifai.com/metro-north.jpg"
model_prediction = segment_model.predict_by_url(url=image_url,
                                                input_type="image")
 
regions = model_prediction.outputs[0].data.regions
 
for region in regions:
    for concept in region.data.concepts:
        # The concept's percentage of image covered
        name = concept.name
        value = round(concept.value, 4)
        print((f"{name}: {value}"))
 
# Output:
# -------
# sky-other: 0.2198
# railroad: 0.1943
# platform: 0.1773
# ceiling-other: 0.1658
# building-other: 0.1185
# train: 0.0939
# tree: 0.0098
# person: 0.008
# unlabeled: 0.0077
# wall-concrete: 0.0047
# fence: 0.0001
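
And if none of the pre-trained models fit, you can create and train your own model inside your app. The sketch below is only an outline under assumptions: the model id and type are placeholders, and the `create_model` / `train` calls are assumed from recent SDK versions; consult the docs for the template and parameter flow that matches yours.

# Note: CLARIFAI_PAT must be set as env variable.
# Sketch only: the model id and type are placeholders, and the training call
# assumes Model.train() with default parameters for your SDK version.
from clarifai.client.app import App
 
app = App(user_id="user_id", app_id="app_id")
 
# Create an empty model of the desired type (here, a visual classifier)
model = app.create_model(model_id="demo_classifier",
                         model_type_id="visual-classifier")
 
# Train on the data already labeled in the app (assumed call)
model_version_id = model.train()
print(model_version_id)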

Organize, share, reuse with AI Lake

Manage your AI applications with Clarifai's intuitive platform. Upload inputs, be it images, text, or videos, and harness them as the foundation to train sophisticated models. Structure your uploaded data as datasets, enabling precise subsets for model training and testing. Define concepts to categorize the classes within detection, classification, and segmentation models. Employ versioning to create and compare multiple iterations of your model, fine-tuning them with varied data to achieve high performance.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.user import User
client = User(user_id="user_id")
 
# Get all apps
apps = client.list_apps()
 
# This is how you can create an app
app = client.create_app(app_id="demo_app",
                        base_workflow="Universal")
 
# This is how you can delete the app 
client.delete_app(app_id="app_id")
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.user import User
app = User(user_id="user_id").app(app_id="app_id")
input_obj = app.inputs()
 
# How to upload an input from a URL
url = 'https://samples.clarifai.com/metro-north.jpg'
input_obj.upload_from_url(input_id='demo',
                          image_url=url)
 
# How to upload an input from a file
input_obj.upload_from_file(input_id='demo',
                       video_file='demo.mp4')
 
# How to upload an input from raw text
input_obj.upload_text(input_id='demo',
                  raw_text='This is a test')
# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.dataset import Dataset
 
# Get a dataset
dataset = Dataset(user_id="user_id", app_id="app_id",
                  dataset_id="dataset_id")
 
# How to upload a dataset from a dataset loader
dataset.upload_dataset(task='visual_segmentation', split="train",
                       dataset_loader='coco_segmentation')
 
# How to upload a dataset from a local folder
dataset.upload_from_folder(folder_path='folder_path',
                           input_type='text', labels=True)
 
# How to upload a text dataset from a csv file
dataset.upload_from_csv(csv_path='csv_path', labels=True)
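
To keep track of what lives in your AI Lake, the client objects also expose listing helpers. A minimal sketch, assuming `App.list_models` and `App.list_datasets` as in recent SDK versions; the attributes on the returned objects may differ between releases.

# Note: CLARIFAI_PAT must be set as env variable.
# Sketch only: listing helpers assumed from recent SDK versions.
from clarifai.client.app import App
 
app = App(user_id="user_id", app_id="app_id")
 
# Enumerate the models and datasets stored in the app
for model in app.list_models():
    print(model)
 
for dataset in app.list_datasets():
    print(dataset)
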
Docs
Consult Clarifai's Documentation for clear information, guides, and tutorials on using our AI models and workflows effectively.
Community
Access Clarifai's free AI resources. Use our apps, models, and workflows to enrich your applications.
Discord
Have questions or want to know more about Clarifai? Join our Discord! It's the easiest and fastest way to get support.

Don’t just take our word for it

What developers and clients say about us

Computer Vision and LLM AI Lifecycle Platform

The developer platform for any deep learning use case

Chat with your data

Retrieval Augmented Generation (RAG) enables users to interact conversationally with their own data, using NLP to pull relevant information from datasets. RAG is a two-step process: first, it retrieves the documents most likely to contain the answer, then it generates a response grounded in those retrieved documents. The result is a chatbot that delivers personalized, source-grounded responses and dramatically reduces hallucinations.
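
The `RAG` client shown earlier covers both steps: `upload` indexes your documents for retrieval, and `chat` generates answers from what was retrieved. A minimal sketch; the folder path and question are placeholders, and the returned conversation format may vary by SDK version.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.rag import RAG
 
# Step 1: index your documents so they can be retrieved later
rag_agent = RAG.setup(user_id="USER_ID")
rag_agent.upload(folder_path="~/docs")
 
# Step 2: answers are generated from the retrieved documents
answer = rag_agent.chat(messages=[{"role": "human",
                                   "content": "What were our Q2 results?"}])
print(answer)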

Facial Recognition

Clarifai's Facial Recognition technology allows for the accurate identification and analysis of human faces. This technology is versatile, aiding in applications such as security, user authentication, and user experience enhancement by quickly and precisely interpreting facial features. Whether it's automating access control or personalizing user interactions, Clarifai provides the tools to integrate facial recognition seamlessly into your applications.
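
A minimal sketch of the same predict pattern applied to faces, assuming a face detection model hosted in the Community (the model URL is illustrative and the image is a placeholder); each detected face comes back as a region with a bounding box.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Illustrative Community URL for a face detection model
url = "https://clarifai.com/clarifai/main/models/face-detection"
face_model = Model(url)
 
# Placeholder image; each detected face is returned as a region
image_url = "https://samples.clarifai.com/metro-north.jpg"
model_prediction = face_model.predict_by_url(url=image_url,
                                             input_type="image")
 
for region in model_prediction.outputs[0].data.regions:
    box = region.region_info.bounding_box
    print(round(box.top_row, 3), round(box.left_col, 3),
          round(box.bottom_row, 3), round(box.right_col, 3))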

Sentiment analysis

Textual Sentiment Analysis technology interprets and evaluates the emotions conveyed within a body of text. This sophisticated tool is instrumental in understanding user sentiments, allowing for enhanced customer interactions and feedback analysis. By transforming raw text into insightful data, it aids in refining product strategies, improving customer relations, and optimizing overall user experience, helping businesses to respond more effectively to their audience’s needs and preferences.
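
Sentiment analysis uses the same text predict pattern as the LLM examples above; the model URL below is a placeholder for whichever sentiment classifier you pick from the Community.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Placeholder URL: substitute the sentiment classifier you want to use
url = "https://clarifai.com/<user_id>/<app_id>/models/<sentiment-model>"
sentiment_model = Model(url)
 
text = b"The checkout flow was quick and the support team was fantastic."
model_prediction = sentiment_model.predict_by_bytes(text, "text")
 
# Sentiment classifiers return scored concepts (e.g. positive / negative)
for concept in model_prediction.outputs[0].data.concepts:
    print(f"{concept.name}: {concept.value}")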

Speech synthesis 

Speech Synthesis transforms text into natural, lifelike speech, allowing developers to create applications that talk in a human-like voice. This advanced technology enhances user engagement by providing auditory interaction, making information more accessible and interaction more intuitive. Whether it’s for assistive technologies, entertainment, or customer service applications, Speech Synthesis brings versatility to voice-enabled experiences, enabling a more inclusive and interactive future.
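
Speech generation follows the same pattern but returns audio instead of text. The model URL is a placeholder for a text-to-speech model from the Community, and reading the result from `data.audio.base64` assumes the standard Clarifai audio output field.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Placeholder URL: substitute a text-to-speech model of your choice
url = "https://clarifai.com/<user_id>/<app_id>/models/<text-to-speech-model>"
tts_model = Model(url)
 
text = b"Welcome back! Your weekly summary is ready."
model_prediction = tts_model.predict_by_bytes(text, "text")
 
# The generated audio comes back as raw bytes (assumes data.audio.base64)
audio_bytes = model_prediction.outputs[0].data.audio.base64
with open("speech.wav", "wb") as f:
    f.write(audio_bytes)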

Summarization

Summarization distills lengthy texts down to their essential points, providing clear, concise summaries. This advanced tool is invaluable for quickly understanding and conveying key information from extensive documents or content, aiding in efficient knowledge acquisition and decision-making. Whether used for academic research, content creation, or business intelligence, our summarization technology enables users to save time and focus on what truly matters.
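
Summarization is simply a text-in, text-out call to an LLM, using the same predict pattern and LLM URL as the earlier example; the prompt here is a placeholder.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Same LLM URL as the example above; the prompt is a placeholder
llm_url = "https://clarifai.com/clarifai/completion/models/llm"
prompt = b"Summarize the following report in three bullet points: <report text>"
llm = Model(llm_url)
model_prediction = llm.predict_by_bytes(prompt, "text")
 
print(model_prediction.outputs[0].data.text.raw)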

Text moderation

Text Moderation identifies and filters inappropriate or harmful text content, ensuring online spaces maintain a positive and safe environment for users. This technology is crucial for businesses and developers aiming to uphold community guidelines and standards across platforms, from social media to forums. By automating content moderation, it allows for a proactive approach to manage and mitigate risks associated with user-generated content.
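
Text moderation reuses the text predict pattern; the model URL is a placeholder for whichever moderation classifier you choose, and the returned concepts are its moderation categories.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Placeholder URL: substitute the text moderation model you want to use
url = "https://clarifai.com/<user_id>/<app_id>/models/<text-moderation-model>"
moderation_model = Model(url)
 
text = b"User-generated comment to screen before it goes live."
model_prediction = moderation_model.predict_by_bytes(text, "text")
 
# Each concept is a moderation category (e.g. toxic, insult) with a score
for concept in model_prediction.outputs[0].data.concepts:
    print(f"{concept.name}: {concept.value}")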

Translation

Translation technology enables the conversion of text from one language to another with high accuracy, facilitating communication across language barriers. This solution is essential for developers looking to make their content accessible to a global audience, enhancing user understanding and interaction. Whether it’s for customer support, content creation, or multilingual platforms, our translation tools bridge linguistic gaps, fostering inclusivity and connection.
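
Translation models take text in and return text out, so the call looks like the LLM examples; the model URL below is a placeholder for a translation model from the Community.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Placeholder URL: substitute the translation model you want to use
url = "https://clarifai.com/<user_id>/<app_id>/models/<translation-model>"
translator = Model(url)
 
text = b"Bonjour, comment puis-je vous aider aujourd'hui ?"
model_prediction = translator.predict_by_bytes(text, "text")
 
# The translated text comes back in the same text field used by the LLM examples
print(model_prediction.outputs[0].data.text.raw)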

Visual moderation

Visual moderation empowers platforms to detect and filter out inappropriate or harmful visual content, creating a safer online environment. This solution is key for businesses and developers aiming to maintain a positive user experience on their platforms, ranging from social media to community forums. By leveraging image analysis, it proactively moderates content, helping to uphold community standards and protect user well-being.
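
Visual moderation mirrors the classification example above: pass an image URL and read back scored concepts. The model URL below is illustrative of the moderation classifiers hosted in the Community.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model
 
# Illustrative Community URL for an image moderation classifier
url = "https://clarifai.com/clarifai/main/models/moderation-recognition"
moderation_model = Model(url)
 
image_url = "https://samples.clarifai.com/metro-north.jpg"
model_prediction = moderation_model.predict_by_url(url=image_url,
                                                   input_type="image")
 
# Concepts such as safe / suggestive / explicit, each with a confidence score
for concept in model_prediction.outputs[0].data.concepts:
    print(f"{concept.name}: {concept.value}")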

Trusted by enterprises. Powered by partners.

Advance AI adoption with Clarifai’s network of partners

Build your first generative AI app in under five minutes with Clarifai.
