moderation-abuse-korean
A text classification/moderation model that classifies Korean text into four concepts: hate speech, offensive language, gender bias, or other bias.
Notes
Purpose
This model detects hate speech and abusive language in Korean text. It outputs one of four concepts (see the mapping sketch after this list):
- "3" : Hate
- "2" : Offensive
- "1" : Gender bias
- "0" : Other bias
Architecture
This is a KcELECTRA model fine-tuned for hate speech detection.
The base model, KcELECTRA-base, is an ELECTRA model pretrained from scratch on user comments from Naver News.
The dataset used for fine-tuning:
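The card does not name the fine-tuning dataset, so the following is only a minimal sketch of how such a fine-tune could be set up with Hugging Face Transformers. The beomi/KcELECTRA-base checkpoint, the train_texts/train_labels placeholders, and the single manual training step are assumptions, not the model's published training recipe:

```python
# Sketch: fine-tuning KcELECTRA-base for 4-way classification.
# Assumptions: the "beomi/KcELECTRA-base" checkpoint on the Hugging Face Hub,
# and train_texts/train_labels as placeholders for the unnamed dataset.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "beomi/KcELECTRA-base",
    num_labels=4,  # hate / offensive / gender bias / other bias
)

train_texts = ["..."]   # placeholder: Korean comments
train_labels = [3]      # placeholder: concept IDs 0-3, as listed under Purpose

batch = tokenizer(train_texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=torch.tensor(train_labels))
outputs.loss.backward()  # one gradient step; a real run would use Trainer or an optimizer loop
```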
Intended Use
Multi-class classification of Korean text
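This page has the layout of a Clarifai model card, so a hedged sketch of calling the model through the clarifai-grpc Python client follows. The user_id/app_id values, the YOUR_PAT token placeholder, and the sample input are assumptions to replace with your own:

```python
# Sketch: classifying Korean text with this model via the Clarifai gRPC API.
# Assumptions: the model is hosted in Clarifai's public "main" app under user
# "clarifai", and YOUR_PAT stands in for a real personal access token.
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

stub = service_pb2_grpc.V2Stub(ClarifaiChannel.get_grpc_channel())
metadata = (("authorization", "Key YOUR_PAT"),)

response = stub.PostModelOutputs(
    service_pb2.PostModelOutputsRequest(
        user_app_id=resources_pb2.UserAppIDSet(user_id="clarifai", app_id="main"),
        model_id="moderation-abuse-korean",
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(raw="분류할 한국어 텍스트")  # placeholder Korean input
                )
            )
        ],
    ),
    metadata=metadata,
)
if response.status.code != status_code_pb2.SUCCESS:
    raise RuntimeError(f"Request failed: {response.status.description}")

# Each predicted concept carries a name ("0"-"3", per the Purpose list)
# and a confidence value.
for concept in response.outputs[0].data.concepts:
    print(concept.name, concept.value)
```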
Resources
- ID: f6fb536be02f4c34a92be44c1093ce55
- Name: moderation-abuse-korean
- Model Type ID: Text Classifier
- Description: A text classification/moderation model that classifies Korean text into four concepts: hate speech, offensive language, gender bias, or other bias.
- Last Updated: Jan 22, 2023
- Privacy: PUBLIC