Toxicity Moderation
This workflow uses the Mistral-7B model with a specified prompt template for toxicity moderation. It identifies and filters out 'Toxic' sentiment, including aggression, hostility, or undue negativity, and classifies each input as 'Toxic', 'Suspicious', or 'Safe' based on its tone and content.
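The exact prompt template is configured inside the workflow and is not shown on this page; a hypothetical template of this kind might look like the sketch below (the {data.text.raw} placeholder follows Clarifai's prompter convention for injecting the input text).

# Hypothetical prompt template -- the workflow's actual template may differ.
PROMPT_TEMPLATE = (
    "Classify the sentiment of the following text as 'Toxic', 'Suspicious', or 'Safe'. "
    "'Toxic' covers aggression, hostility, or undue negativity.\n"
    "Text: {data.text.raw}\n"
    "Classification:"
)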
Text Moderation
Large Language Models (LLMs), such as GPT (Generative Pretrained Transformer) variants, are pre-trained on diverse internet text, which enables them to understand context, nuance, and the subtleties of human language. This makes them well suited to identifying and filtering out inappropriate or harmful content on digital platforms.
How to use the Toxicity Moderation workflow?
Using Clarifai SDK
Export your PAT as an environment variable. Then, import and initialize the API Client.
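For example (a minimal sketch: the Clarifai SDK reads your Personal Access Token from the CLARIFAI_PAT environment variable; the value below is a placeholder):

import os

# Set your Personal Access Token (shell equivalent: export CLARIFAI_PAT="YOUR_PAT").
os.environ["CLARIFAI_PAT"] = "YOUR_PAT"  # placeholder; use your actual PAT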
from clarifai.client.workflow import Workflow
workflow_url = 'https://clarifai.com/clarifai/text-moderation/workflows/text-moderation-toxicity-mistral-7b'
text = 'I love this movie and i would watch it again and again!'

prediction = Workflow(workflow_url).predict_by_bytes(text.encode(), input_type="text")

# Get workflow results
print(prediction.results[0].outputs[-1].data)
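Because the final node in this workflow is a prompt-templated LLM, the last output is the model's text response. Assuming Clarifai's standard text output structure (an assumption, since the workflow's output schema is not shown here), the classification label can be read like this:

# Read the model's classification from the text field of the last output
# (assumes the workflow's final output is text).
print(prediction.results[0].outputs[-1].data.text.raw)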
Using Workflow
To use the Toxicity Moderation workflow in the portal, input text through the blue plus 'Try your own Input' button; the workflow flags 'Toxic' sentiment, including aggression, hostility, or undue negativity.
Workflow ID
text-moderation-toxicity-mistral-7b
Description
This workflow uses the Mistral-7B model with a specified prompt template for toxicity moderation, identifying and filtering out 'Toxic' sentiment, including aggression, hostility, or undue negativity.
Last Updated
Apr 09, 2024
Privacy
PUBLIC