moderation-abuse-korean
A text classification/moderation model that classifies Korean text into four concepts: hate speech, offensive language, gender bias, or other bias.
Model base cost: $0.0032 / request
Notes
moderation-abuse-korean
Purpose
This model detects hate speech in Korean text. It outputs four concepts:
- "3" : Hate
- "2" : Offensive
- "1" : Gender bias
- "0" : Other bias
Architecture
This is a KcELECTRA model fine-tuned for hate speech detection.
The underlying KcELECTRA-base model is an ELECTRA model pretrained from scratch on Naver news comments.
The dataset used for fine-tuning:
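For illustration, the sketch below loads the public KcELECTRA-base checkpoint from HuggingFace and attaches a four-way classification head matching the concepts above. The checkpoint name `beomi/KcELECTRA-base` and the label mapping are assumptions based on this card; the actual fine-tuned weights behind this model are not linked here.

```python
# A minimal sketch, assuming the HuggingFace checkpoint "beomi/KcELECTRA-base"
# and the four concept labels listed above. This is not the exact fine-tuned
# model served here; the classification head below starts randomly initialized
# and would need training on a labeled Korean hate-speech dataset.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

id2label = {3: "Hate", 2: "Offensive", 1: "Gender bias", 0: "Other bias"}

tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "beomi/KcELECTRA-base",
    num_labels=4,
    id2label=id2label,
    label2id={v: k for k, v in id2label.items()},
)

# Classify a single Korean comment ("a Korean sentence to classify").
inputs = tokenizer("분류할 한국어 문장", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, id2label[pred])
```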
Intended Use
Multi-class classification of Korean text
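A minimal prediction sketch using the Clarifai Python gRPC client is shown below. The `user_id`/`app_id` values (`clarifai`/`main`) and `YOUR_PAT` are placeholders assumed for illustration; substitute your own personal access token and the app that hosts the model.

```python
# A minimal sketch using the clarifai-grpc client; YOUR_PAT and the
# user_id/app_id values are placeholders, not confirmed by this card.
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

stub = service_pb2_grpc.V2Stub(ClarifaiChannel.get_grpc_channel())
metadata = (("authorization", "Key YOUR_PAT"),)

request = service_pb2.PostModelOutputsRequest(
    user_app_id=resources_pb2.UserAppIDSet(user_id="clarifai", app_id="main"),
    model_id="moderation-abuse-korean",
    inputs=[resources_pb2.Input(
        data=resources_pb2.Data(text=resources_pb2.Text(raw="분류할 한국어 문장")),
    )],
)
response = stub.PostModelOutputs(request, metadata=metadata)
if response.status.code != status_code_pb2.SUCCESS:
    raise RuntimeError(f"Request failed: {response.status.description}")

# Each returned concept carries an ID ("0"-"3" per the list above),
# a name, and a confidence value.
for concept in response.outputs[0].data.concepts:
    print(concept.id, concept.name, round(concept.value, 4))
```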
Resources
- ID: f6fb536be02f4c34a92be44c1093ce55
- Model Type ID: Text Classifier
- Input Type: text
- Output Type: concepts
- Description: A text classification/moderation model that classifies Korean text into four concepts: hate speech, offensive language, gender bias, or other bias.
- Last Updated: Jan 22, 2023
- Privacy: PUBLIC
- Use Cases: text-classification, text-moderation
- Toolkits: Clarifai, HuggingFace
- License: None