This workflow wraps the English text moderation model, which classifies English text as toxic, insult, obscene, identity_hate, or severe_toxic.
Notes
Text Moderation Classifier
English Text Moderation Model
The English text moderation model analyzes text and detects harmful content. It is especially useful for screening third-party content that may cause problems.
This model returns a list of concepts along with corresponding probability scores indicating the likelihood that each concept is present in the text. The concepts are: toxic, insult, obscene, identity_hate, and severe_toxic.
from clarifai.client.workflow import Workflow

workflow_url = 'https://clarifai.com/clarifai/text-moderation/workflows/english-text-moderation-classifier'
text = 'I love this movie and i would watch it again and again!'

prediction = Workflow(workflow_url).predict_by_bytes(text.encode(), input_type="text")

# Get workflow results
print(prediction.results[0].outputs[-1].data)
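The returned concepts can be post-processed on the client side, for example with a simple threshold filter. The sketch below is illustrative only: it assumes the per-concept probabilities have already been extracted into a plain dict, and the `flag_text` helper and its threshold are hypothetical, not part of the Clarifai SDK.

```python
# The five concepts returned by the English text moderation model.
MODERATION_CONCEPTS = ["toxic", "insult", "obscene", "identity_hate", "severe_toxic"]

def flag_text(scores, threshold=0.5):
    """Return the concepts whose probability meets or exceeds the threshold.

    `scores` is a dict mapping concept name to probability, assumed to have
    been extracted from the workflow's prediction output.
    """
    return [c for c in MODERATION_CONCEPTS if scores.get(c, 0.0) >= threshold]

# Hypothetical scores for an abusive input.
example_scores = {"toxic": 0.92, "insult": 0.81, "obscene": 0.12,
                  "identity_hate": 0.03, "severe_toxic": 0.07}
print(flag_text(example_scores))  # ['toxic', 'insult']
```

A single global threshold is the simplest policy; in practice you might use a lower threshold for severe_toxic than for obscene, depending on how costly false negatives are for each category.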
Using Workflow
To use the Text Moderation workflow, input text through the blue plus Try your own Input button, and the workflow will classify the text as toxic, insult, obscene, identity_hate, or severe_toxic.
Workflow ID
english-text-moderation-classifier
Description
This workflow wraps the English text moderation model, which classifies English text as toxic, insult, obscene, identity_hate, or severe_toxic.
Last Updated
Apr 09, 2024
Privacy
PUBLIC