Text Moderation Classifier
This workflow wraps the Multilingual Text Moderation model, which classifies text as toxic, insult, obscene, identity_hate, severe_toxic, or threat.
Multilingual Text Moderation Model
The multilingual text moderation model analyzes text and detects harmful content. It is especially useful for screening user-generated or third-party content before it is published.
The model returns a list of concepts, each with a probability score indicating the likelihood that the concept is present in the text. The concepts are:
toxic
insult
obscene
identity_hate
severe_toxic
threat
Text moderation can be performed in the top 100 languages with the largest Wikipedias:
from clarifai.client.workflow import Workflow

workflow_url = 'https://clarifai.com/clarifai/text-moderation/workflows/multilingual-text-moderation-classifier'
text = 'I love this movie and i would watch it again and again!'

prediction = Workflow(workflow_url).predict_by_bytes(text.encode(), input_type="text")

# Get workflow results
print(prediction.results[0].outputs[-1].data)
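Each concept in the response carries a probability score, so application code can iterate over the results and decide whether to flag the input. The snippet below is a minimal sketch that continues from the prediction above; the concepts field with name and value attributes follows the usual Clarifai response structure, and the 0.5 threshold is an illustrative choice, not a value recommended by Clarifai.

# Iterate over the returned concepts and flag the text if any
# moderation score exceeds an illustrative threshold of 0.5.
FLAG_THRESHOLD = 0.5

concepts = prediction.results[0].outputs[-1].data.concepts
for concept in concepts:
    print(f"{concept.name}: {concept.value:.3f}")

flagged = [c.name for c in concepts if c.value > FLAG_THRESHOLD]
if flagged:
    print("Flagged as:", ", ".join(flagged))
else:
    print("No moderation concepts exceeded the threshold.")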
Using the Workflow
To try the Text Moderation workflow in the Clarifai portal, enter text through the blue plus "Try your own input" button, and the workflow will classify it as toxic, insult, obscene, identity_hate, severe_toxic, or threat.
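If you would rather call the workflow programmatically than through the portal, the Python SDK can also predict on hosted inputs. This is a minimal sketch assuming the predict_by_url method, which mirrors the predict_by_bytes call shown earlier; the text-file URL is a placeholder.

from clarifai.client.workflow import Workflow

workflow_url = 'https://clarifai.com/clarifai/text-moderation/workflows/multilingual-text-moderation-classifier'

# Run the workflow on a text file hosted at a public URL
# (placeholder URL for illustration).
workflow = Workflow(workflow_url)
prediction = workflow.predict_by_url('https://example.com/user-comment.txt', input_type="text")
print(prediction.results[0].outputs[-1].data)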
Workflow ID: multilingual-text-moderation-classifier
Description: This workflow wraps the Multilingual Text Moderation model, which classifies text as toxic, insult, obscene, identity_hate, severe_toxic, or threat.
Last Updated: Apr 09, 2024
Privacy: PUBLIC