Model
workflow-model-0747-e7c4173bd632
Template
<s>[INST] Assess the sentiment of the following message. Classify the sentiment as 'Toxic', 'Suspicious', or 'Safe' based on the tone and content. A 'Toxic' sentiment includes aggression, hostility, or undue negativity. Text:
{data.text.raw}
[/INST]
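A prompter model like this one is just a stored template: at inference time the `{data.text.raw}` placeholder is replaced with the incoming text before the filled-in prompt is passed to an LLM. A minimal sketch of that substitution step, using the template above (the `render_prompt` helper and the sample message are illustrative, not part of the platform's API):

```python
# The template text and the {data.text.raw} placeholder are taken verbatim
# from the model page; everything else here is an illustrative sketch.
TEMPLATE = (
    "<s>[INST] Assess the sentiment of the following message. Classify the "
    "sentiment as 'Toxic', 'Suspicious', or 'Safe' based on the tone and "
    "content. A 'Toxic' sentiment includes aggression, hostility, or undue "
    "negativity. Text:\n"
    "{data.text.raw}\n"
    "[/INST]"
)

def render_prompt(raw_text: str) -> str:
    """Fill the moderation template with the user's input text.

    str.replace is used (rather than str.format) because the placeholder
    name contains dots, which str.format would treat as attribute access.
    """
    return TEMPLATE.replace("{data.text.raw}", raw_text)

prompt = render_prompt("Have a wonderful day!")
print(prompt)
```

The rendered prompt is what the downstream LLM actually receives; the `[INST] ... [/INST]` markers follow the Mistral/Llama-style instruction format the template assumes.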
ID / Name: workflow-model-0747-e7c4173bd632
Model Type ID: prompter
Description: --
Last Updated: Apr 09, 2024
Privacy: PUBLIC
License: --