An image moderation classifier workflow built on a ResNeXt architecture to identify 32 concepts, including GARM categories.

Image Moderation-all Workflow

This workflow is a wrapper around the Image Moderation Classifier, designed to moderate nudity, sexually explicit content, and otherwise harmful or abusive user-generated content (UGC) imagery. It helps determine whether a given input image meets community and brand standards.

This model can identify 32 concepts in the following GARM Content Categories:

  • Adult & Explicit Sexual Content
  • Crime & Harmful acts to individuals and Society, Human Right Violations
  • Death, Injury or Military Conflict
  • Illegal Drugs/Tobacco/e-cigarettes/Vaping/Alcohol
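Downstream, classifications under these categories are often mapped to moderation actions. The sketch below is a hypothetical policy table, not part of the Clarifai workflow; the category names mirror the list above, and the block/review actions are illustrative choices you would tune to your own community standards.

```python
# Hypothetical mapping from GARM content category to a moderation action.
# The actions ("block" / "review") are illustrative, not Clarifai behavior.
GARM_POLICY = {
    "Adult & Explicit Sexual Content": "block",
    "Crime & Harmful acts to individuals and Society, Human Right Violations": "review",
    "Death, Injury or Military Conflict": "review",
    "Illegal Drugs/Tobacco/e-cigarettes/Vaping/Alcohol": "review",
}

def action_for(category: str) -> str:
    # Default to manual review for any category not explicitly mapped
    return GARM_POLICY.get(category, "review")

print(action_for("Adult & Explicit Sexual Content"))  # -> block
```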

Run the Image Moderation workflow

Using Clarifai SDK

Export your PAT as an environment variable. Then, import and initialize the API Client.

Find your PAT in your security settings.

export CLARIFAI_PAT={your personal access token}

Prediction with the workflow

from clarifai.client.workflow import Workflow

# URL of the public Image Moderation workflow
workflow_url = 'https://clarifai.com/clarifai/image-moderation/workflows/image-moderation-classifier-v2'

# Publicly hosted sample image to moderate
image_url = "https://samples.clarifai.com/metro-north.jpg"

# Run the workflow on the image by URL
prediction = Workflow(workflow_url).predict_by_url(image_url, input_type="image")

# Get workflow results from the last node's output
print(prediction.results[0].outputs[-1].data)
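The workflow returns a probability score per concept. A common next step is to flag an image when any concept's score crosses a confidence threshold. The sketch below assumes the results have been flattened into `(name, score)` pairs; the concept names, scores, and the 0.85 cutoff are illustrative, not real model output.

```python
# Hypothetical post-processing of workflow concept scores.
THRESHOLD = 0.85  # assumed moderation cutoff; tune per use case

def flag_concepts(concepts, threshold=THRESHOLD):
    """Return the (name, score) pairs whose score meets the threshold."""
    return [(name, score) for name, score in concepts if score >= threshold]

# Example scores shaped like flattened workflow output (illustrative values)
predicted = [("safe", 0.97), ("drug", 0.02), ("explicit", 0.01)]
print(flag_concepts(predicted))  # -> [('safe', 0.97)]
```

An image would then be routed to blocking or manual review whenever any unsafe concept appears in the flagged list.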

Using Workflow

To use the Image Moderation workflow in the Clarifai portal, add images via the blue plus (Try your own Input) button; the workflow outputs a probability distribution over the concepts.

  • Workflow ID
    image-moderation-classifier-v2
  • Description
    An image moderation classifier workflow built on a ResNeXt architecture to identify 32 concepts, including GARM categories.
  • Last Updated
    Apr 01, 2024
  • Privacy
    PUBLIC