App Overview

Welcome to the image-moderation-de99ac7c04b8 Overview page.

A Clarifai app is a place for you to organize all of your content, including models, workflows, inputs, and more.

For app owners, API keys and Collaborators have been moved under App Settings.


Image Moderation Template

Overview

The Image Moderation Template discusses several image moderation use cases and comes with ready-to-use workflows for each, leveraging different computer vision models trained by Clarifai for image moderation.

Image Moderation

Image moderation involves reviewing and filtering images to ensure they adhere to certain standards, guidelines, or regulations. Typically this means identifying and removing images that contain inappropriate, offensive, or harmful content, such as violence, nudity, hate speech, or explicit material, from websites, social media platforms, or online communities.

Image Moderation using Artificial Intelligence

AI-based image moderation leverages artificial intelligence, particularly computer vision, to automatically review and filter out inappropriate or harmful images from digital platforms. This process aims to ensure that content aligns with the platform's community standards or regulatory requirements without the need for extensive manual review.

AI-based image moderation significantly enhances the scalability and efficiency of content moderation processes, enabling real-time filtering and reducing the workload on human moderators.

Moderation Models

Clarifai has trained multiple computer vision models for different image moderation use cases. Let's look at each of them:

Moderation Recognition

The Image Moderation Classifier model has been designed to moderate nudity, sexually explicit, or otherwise harmful or abusive user-generated content (UGC) imagery. This helps determine whether any given input image meets community and brand standards.

The model outputs a probability distribution among five different labels:

  • gore - violent images or scenes that show a lot of blood, injuries to the flesh or bones, and even cannibalism.
  • explicit - pictures of sexual acts and/or sexual parts, nudity, or the exposure of the nipples, genitals, buttocks, or other taboo areas of the body.
  • suggestive - images that portray people who are barely clad, in poses that can cause sexual arousal.
  • drug - images depicting drugs, i.e., substances (other than food) used to prevent, diagnose, treat, or relieve symptoms of a disease or abnormal condition.
  • safe - images that are safe and meet community and brand standards.

Image Moderation Workflow

image-moderation-classifier: This workflow is wrapped around the Image Moderation Classifier model, which classifies images as containing nudity, sexually explicit content, or otherwise harmful or abusive user-generated content (UGC) imagery.
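
As a quick illustration, below is a minimal sketch of calling this workflow with Clarifai's Python SDK. The workflow URL and the response fields shown are assumptions, not part of the template; copy the exact workflow URL from this app's Workflows tab.

# Minimal sketch: run the image-moderation-classifier workflow on an image URL.
# The workflow URL below is an assumption -- use the URL shown in this app.
from clarifai.client.workflow import Workflow

workflow_url = "https://clarifai.com/clarifai/main/workflows/image-moderation-classifier"  # assumed URL
image_url = "https://samples.clarifai.com/metro-north.jpg"

workflow = Workflow(url=workflow_url)  # reads CLARIFAI_PAT from the environment
prediction = workflow.predict_by_url(image_url, input_type="image")

# The last output of the workflow carries the classifier's five concepts
# (gore, explicit, suggestive, drug, safe) with their probabilities.
for concept in prediction.results[0].outputs[-1].data.concepts:
    print(f"{concept.name}: {concept.value:.3f}")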

NSFW-recognition

The NSFW Recognition model identifies whether images are safe for viewing (SFW) or not safe for viewing (NSFW) in American workplaces.

The NSFW recognition model predicts the likelihood that an image contains suggestive or sexually explicit nudity. It's a great solution for anyone trying to moderate or filter nudity from their platform automatically. It is limited to nudity-specific use cases.

NSFW-recognition Workflow

nsfw-recognition: This workflow is wrapped around the NSFW Recognition classifier, which classifies images as containing nudity or sexually explicit imagery.
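
For reference, here is a small sketch (with an assumed model URL and an illustrative threshold) of calling the NSFW Recognition model directly with the Python SDK and turning its probabilities into a filter decision.

# Minimal sketch: score an image with the NSFW Recognition model and apply a cutoff.
from clarifai.client.model import Model

model_url = "https://clarifai.com/clarifai/main/models/nsfw-recognition"  # assumed URL
image_url = "https://samples.clarifai.com/metro-north.jpg"

prediction = Model(url=model_url).predict_by_url(image_url, input_type="image")

# The model typically returns two concepts, "sfw" and "nsfw", with probabilities.
scores = {c.name: c.value for c in prediction.outputs[0].data.concepts}
is_nsfw = scores.get("nsfw", 0.0) >= 0.85  # threshold chosen for illustration only
print(scores, "-> filter" if is_nsfw else "-> allow")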

Hate-symbol-detection

An AI model for detecting regions of images that contain hate symbols.

The Hate Symbol Detector identifies the presence and location of an ADL-recognized hate symbol in any image. The taxonomy for v1 contains two concepts:

  • Confederate battle flag: The battle flag features a blue cross, edged with a white band, on a red field. There are three stars on each arm of the cross and one star in the center.
  • Swastika: An equilateral cross with arms bent at right angles, all in the same rotary direction, usually clockwise. The swastika as a symbol of prosperity and good fortune is widely distributed throughout the ancient and modern world.

Hate-symbol-detection Workflow

hate-symbol-detector: This workflow is wrapped around the Hate Symbol Detector model, which detects ADL-recognized hate symbols.
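
Because this is a detection workflow, the response contains regions with bounding boxes rather than whole-image concepts. The sketch below (assumed workflow URL and response layout) shows how those regions can be read with the Python SDK.

# Minimal sketch: list detected hate symbols and their bounding boxes.
from clarifai.client.workflow import Workflow

workflow_url = "https://clarifai.com/clarifai/main/workflows/hate-symbol-detector"  # assumed URL
image_url = "https://samples.clarifai.com/metro-north.jpg"

prediction = Workflow(url=workflow_url).predict_by_url(image_url, input_type="image")

for region in prediction.results[0].outputs[-1].data.regions:
    box = region.region_info.bounding_box  # relative coordinates in [0, 1]
    for concept in region.data.concepts:
        print(f"{concept.name} ({concept.value:.2f}) at "
              f"top={box.top_row:.2f} left={box.left_col:.2f} "
              f"bottom={box.bottom_row:.2f} right={box.right_col:.2f}")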

Image Moderation-all

The Image Moderation Classifier V2 model has been designed to moderate nudity, sexually explicit, or otherwise harmful or abusive user-generated content (UGC) imagery. This helps determine whether any given input image meets community and brand standards.

This model can identify 32 concepts in the following GARM Content Categories:

  • Adult & Explicit Sexual Content
  • Crime & Harmful acts to individuals and Society, Human Right Violations
  • Death, Injury or Military Conflict
  • Illegal Drugs / Tobacco / E-cigarettes / Vaping / Alcohol

Image Moderation-all Workflow

image-moderation-classifier-v2: This workflow is a wrapper around the Image Moderation Classifier V2 model, which classifies images into the 32 concepts across the GARM Content Categories listed above.
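
Since this model returns scores for 32 concepts rather than a single label, a common pattern is to collect every non-safe concept above a cutoff and flag the image if any remain. The sketch below assumes the workflow URL and a threshold purely for illustration.

# Minimal sketch: turn the 32-concept output into a simple allow/flag decision.
from clarifai.client.workflow import Workflow

workflow_url = "https://clarifai.com/clarifai/main/workflows/image-moderation-classifier-v2"  # assumed URL
image_url = "https://samples.clarifai.com/metro-north.jpg"

prediction = Workflow(url=workflow_url).predict_by_url(image_url, input_type="image")
concepts = prediction.results[0].outputs[-1].data.concepts

flagged = {c.name: round(c.value, 3) for c in concepts
           if c.name != "safe" and c.value >= 0.5}  # illustrative cutoff
print("flagged concepts:", flagged if flagged else "none -- image looks safe")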

Image Moderation using a Multimodal model

Multimodal models like Claude-3-opus, GPT-4-vision, Gemini, etc., can be used for image moderation since these models accept both image and text as input.

Multiple multimodal models are available on Clarifai, but for image moderation use cases, Claude-3-opus seems to work better than the others. Some other models did not work well on these use cases because the requests triggered their own moderation/safety violations.

Run Claude-3 Opus with an API

You can run the Claude-3 Opus Model API using Clarifai’s Python SDK.

Export your PAT as an environment variable. Then, import and initialize the API Client.

Find your PAT in your security settings.

export CLARIFAI_PAT={your personal access token}

Predict via Image URL

from clarifai.client.model import Model
from clarifai.client.input import Inputs

# Moderation prompt sent to the multimodal model along with the image.
prompt = '''Act as a content moderator that classifies image content as safe or unsafe.
Classify the image into one of the following categories:
1. Drug
2. Explicit
3. Gore
4. Suggestive
5. Safe'''
image_url = "https://samples.clarifai.com/metro-north.jpg"

inference_params = dict(temperature=0.2, max_tokens=100, top_k=50, system_prompt="You are a helpful assistant.")

# Send the image and the prompt together as a single multimodal input.
model_prediction = Model("https://clarifai.com/anthropic/completion/models/claude-3-opus").predict(
    inputs=[Inputs.get_multimodal_input(input_id="", image_url=image_url, raw_text=prompt)],
    inference_params=inference_params,
)

print(model_prediction.outputs[0].data.text.raw)
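
Because a multimodal model returns free text rather than concepts, a small post-processing step is usually needed to turn the reply into a label. Below is a deliberately naive sketch that scans the response for the five category names from the prompt; it reuses the model_prediction variable from the snippet above and is only an assumption about how you might consume the output.

# Naive sketch: map Claude's free-text reply onto one of the five categories.
def parse_moderation_label(text: str) -> str:
    labels = ["drug", "explicit", "gore", "suggestive", "safe"]
    lowered = text.lower()
    for label in labels:
        if label in lowered:
            return label  # returns the first match; can mislabel replies like "not explicit, safe"
    return "unknown"  # reply did not name any category

print("moderation label:", parse_moderation_label(model_prediction.outputs[0].data.text.raw))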

You can also run the Claude-3 Opus API using other Clarifai client libraries such as Java, cURL, NodeJS, and PHP.


  • Description
    Image Moderation Template provides diverse AI-powered workflows for automatically filtering and categorizing inappropriate or harmful images based on various criteria.
  • Base Workflow
  • Last Updated
    Aug 06, 2024
  • Default Language
    en