clarifai / text-moderation
workflow-model-0747-e7c4173bd632
2f1246de8bce4821b000f852d3443496
Template
<s>[INST] Assess the sentiment of the following message. Classify the sentiment as 'Toxic', 'Suspicious', or 'Safe' based on the tone and content. A 'Toxic' sentiment includes aggression, hostility, or undue negativity. Text:
{data.text.raw}
[/INST]
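A minimal Python sketch of how a prompter template like the one above wraps incoming text before it is passed to the underlying LLM. The template string is copied from this page; `fill_template` is a hypothetical helper written for illustration, not part of the Clarifai SDK, and the exact substitution mechanics used by the platform are an assumption.

```python
# Prompt template as shown on this model page (Mistral-style [INST] tags).
TEMPLATE = (
    "<s>[INST] Assess the sentiment of the following message. Classify the "
    "sentiment as 'Toxic', 'Suspicious', or 'Safe' based on the tone and "
    "content. A 'Toxic' sentiment includes aggression, hostility, or undue "
    "negativity. Text:\n"
    "{data.text.raw}\n"
    "[/INST]"
)

def fill_template(template: str, raw_text: str) -> str:
    """Substitute the {data.text.raw} placeholder with the user's input text.

    Hypothetical helper: a plain string replace, standing in for whatever
    templating the prompter model performs server-side.
    """
    return template.replace("{data.text.raw}", raw_text)

prompt = fill_template(TEMPLATE, "Have a great day!")
print(prompt)
```

The filled prompt is what the downstream model actually sees, so the input text lands between the instruction and the closing `[/INST]` tag.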
Model Type ID: prompter
Description: --
Last Updated: Apr 09, 2024
Privacy: PUBLIC