User generated content (UGC) is no longer found only on social media sites. More and more companies across industries are using user generated content to drive revenue and build brand thought-leadership and loyalty. Automotive companies, restaurants, travel sites and e-commerce sites all leverage UGC to persuade potential customers to choose them. A recent study found that 85% of people trust user generated content over company generated content, which underscores why moderating UGC matters.
Why content moderation?
Not all UGC is images and video; much of it is text, and inappropriate or abusive language is common across the internet, which is why it is important to screen for it. User generated images, video and text can't simply be posted to a website or social media page; they have to be monitored and reviewed first. Think of what could happen to your brand and your community if something illicit were posted. Not only would your reputation be damaged, your community members would also be affected, and you would open yourself up to liability issues. Some content, such as child sexual exploitation material (CSEM), is illegal to have on your servers whether or not you know you possess it. There are also mental-health hazards for human moderators who see this toxic, inappropriate content day after day; lawsuits have been brought over moderators experiencing PTSD from their daily work. For these reasons, content moderation is becoming a critical function for businesses with an online presence.
...so what type of content moderation do I choose?
There are five common types of content moderation. A sixth option is not moderating content at all, which is never a good idea and can send your community into a tailspin. Each of the five approaches below helps maintain a sense of order within your community.
Pre-moderation
It's precisely what it sounds like: content is moderated before it's posted on a website. Pre-moderation, done by a good moderator, ensures that inappropriate content is flagged and kept from being posted. While it provides tight control over what content ends up displayed on your site, it has real downsides. It delays publication, and in today's climate of instant gratification people want and expect to see their content immediately. It can also become costly as your content volume scales. It is best suited to communities with a high level of legal risk, such as celebrity-based or children's communities, where protection is vital. For content that is not conversational or time-sensitive, such as reviews or photos, it can be used without affecting the community too much.
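To make the workflow concrete, here is a minimal sketch in Python of a pre-moderation queue, where nothing is published until a moderator approves it. The class and field names (Submission, PreModerationQueue, status) are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Submission:
    author: str
    text: str
    status: str = "pending"          # pending -> approved | rejected

@dataclass
class PreModerationQueue:
    pending: List[Submission] = field(default_factory=list)
    published: List[Submission] = field(default_factory=list)

    def submit(self, submission: Submission) -> None:
        # Nothing goes live until a moderator has reviewed it.
        self.pending.append(submission)

    def review(self, submission: Submission, approve: bool) -> None:
        # The moderator's decision happens *before* publication.
        self.pending.remove(submission)
        submission.status = "approved" if approve else "rejected"
        if approve:
            self.published.append(submission)

# Usage: review is the only path to visibility.
queue = PreModerationQueue()
post = Submission(author="alice", text="Loved this product!")
queue.submit(post)
queue.review(post, approve=True)
print([p.text for p in queue.published])   # ['Loved this product!']
```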
Post-moderation
This type of content moderation happens after the content is posted. While users prefer it because their content appears immediately, it can bring up a host of problems for the company. As the community grows, the resources needed grow, and cost becomes a factor. Keeping up with the volume of content to moderate, and taking problem posts down quickly, can become an issue. From a legal standpoint, because each piece of content is viewed and approved or rejected, the website owner becomes the publisher of that content, which can expose them to liability risk.
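By contrast with the pre-moderation sketch above, a post-moderation sketch publishes content immediately and only takes it down after review. Again, the names used here (Post, PostModerationFeed) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    author: str
    text: str
    visible: bool = True             # live by default under post-moderation

@dataclass
class PostModerationFeed:
    posts: List[Post] = field(default_factory=list)

    def publish(self, post: Post) -> None:
        # Content appears immediately; review happens afterwards.
        self.posts.append(post)

    def review(self, post: Post, acceptable: bool) -> None:
        # A rejected post is taken down after the fact.
        post.visible = acceptable

    def visible_posts(self) -> List[Post]:
        return [p for p in self.posts if p.visible]

feed = PostModerationFeed()
spam = Post(author="bot42", text="Buy followers now!!!")
feed.publish(spam)                   # instantly visible to the community
feed.review(spam, acceptable=False)  # moderator unpublishes it later
print(len(feed.visible_posts()))     # 0
```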
Reactive moderation
This type of content moderation puts the responsibility on the user community to flag and report inappropriate content. It can be used alongside pre- and post-moderation techniques as a safety net in case anything gets past the moderators, and it is often used as the sole form of moderation. The main advantage of this approach is that you can scale with your community's growth without putting extra strain on your moderation resources or increasing costs. You can theoretically avoid responsibility for defamatory or illegal content uploaded by users, as long as your process for removing content upon notification happens within an acceptable time frame.
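A minimal sketch of the reactive flow might look like the following, where content is escalated for review once enough distinct users report it. The FLAG_THRESHOLD value and function names are assumptions made for illustration only.

```python
from collections import defaultdict

# Hypothetical threshold: after this many distinct user reports, the content
# is hidden and queued for a moderator to act on within an acceptable window.
FLAG_THRESHOLD = 3

flags = defaultdict(set)    # content_id -> set of users who reported it
review_queue = []           # content awaiting a human takedown decision

def report(content_id: str, reporter: str) -> None:
    """Record a community report and escalate once the threshold is reached."""
    flags[content_id].add(reporter)
    if len(flags[content_id]) >= FLAG_THRESHOLD and content_id not in review_queue:
        review_queue.append(content_id)

# Usage: the community, not a moderator, triggers the review.
for user in ("ann", "bob", "cara"):
    report("comment-9001", user)
print(review_queue)          # ['comment-9001']
```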
Distributed moderation
Less common is content moderation that relies on the audience itself: essentially a self-moderated approach with a rating system built in. Content is published to the website directly, and users then vote on whether submissions comply with the community guidelines or rules. The users control which comments or posts remain visible, with some guidance from human moderators.
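One possible sketch of distributed moderation is a simple community scoring scheme: votes raise or lower a score, and content that falls below a threshold is hidden pending a moderator's final call. The threshold and names here are illustrative, not a standard.

```python
# Votes raise or lower a running score; content that falls below the hiding
# threshold is buried until a human moderator makes a final call.
HIDE_BELOW = -5              # illustrative threshold, not a standard value

scores = {}                  # content_id -> community score

def vote(content_id: str, upvote: bool) -> None:
    scores[content_id] = scores.get(content_id, 0) + (1 if upvote else -1)

def is_visible(content_id: str) -> bool:
    return scores.get(content_id, 0) > HIDE_BELOW

# Usage: six downvotes push the post below the threshold and hide it.
for _ in range(6):
    vote("post-17", upvote=False)
print(is_visible("post-17"))  # False
```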
Automated moderation
Automated content moderation is the most common moderation method. It uses computer vision, natural language processing and AI, so not only images but also text, and even text embedded within images, can be screened. Using AI models, content can be reviewed and filtered automatically, faster and at scale, and inappropriate content can be flagged and prevented from being posted almost instantaneously. All of this supports human moderators' work, speeding up the process while improving accuracy.
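As a rough illustration, the sketch below stands in for an automated text-moderation pipeline: a scoring function (here a toy blocklist rather than a real trained model or vendor API) rates each piece of text, and the result is auto-approved, auto-rejected, or routed to a human reviewer. All names and thresholds are assumptions.

```python
import re

# Toy stand-in for an NLP moderation model: a real system would call a trained
# classifier or a vendor API, but the flow is the same. Score the text, then
# auto-approve, auto-reject, or route it to a human reviewer.
BLOCKLIST = {"slur1", "slur2", "spamlink"}       # illustrative placeholder terms

def toxicity_score(text: str) -> float:
    """Fraction of tokens hitting the blocklist (stand-in for a model score)."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def moderate(text: str, reject_at: float = 0.2, escalate_at: float = 0.05) -> str:
    score = toxicity_score(text)
    if score >= reject_at:
        return "rejected"      # blocked before it is ever published
    if score >= escalate_at:
        return "needs_human"   # borderline content goes to a moderator
    return "approved"

print(moderate("Great write-up, thanks for sharing!"))   # approved
print(moderate("spamlink spamlink buy now"))             # rejected
```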