Content moderation can now be powered by AI. While this makes moderation more efficient than ever, choosing the right training data is essential to creating a safer web experience.
How Machine Learning Optimizes Content Moderation
The internet has over 4.5 billion users and growing, generating billions of images, videos, messages, posts, and other content types every day. This content must be regulated in some way, as most internet users want to visit their favorite social media platforms or online retailers and have a safe, positive experience. Content moderation is the solution: it removes any data that’s explicit, abusive, fake, scammy, harmful, or not business-friendly.

Companies have traditionally relied on people for their content moderation needs, but as usage and content grow, this method is no longer cost-effective or efficient. Organizations are instead investing in machine learning (ML) strategies to create algorithms that moderate content automatically.

Content moderation powered by artificial intelligence (AI) enables online enterprises to scale faster and moderate content more consistently for users. It doesn’t eliminate the need for human moderators (human-in-the-loop), who still provide ground truth monitoring for accuracy and handle the more contextual, nuanced content concerns. But it does reduce the amount of content moderators need to review, which is a positive: unwanted exposure to harmful content has an adverse impact on mental health. Leaving this difficult task to machines delivers benefits for companies, their employees, and users alike.
Real World Applications of Content Moderation
Companies use ML-based content moderation for various digital media use cases, from video games to chatbots and chat rooms. Two of the most common applications, though, are social media and online retail.
Social Media
Social media has a content problem. Facebook alone has over two billion users who watch a collective 100 million hours of video and upload more than 350 million photos on an average day. Hiring enough people to manually review the content this traffic creates would be incredibly costly and time-intensive. AI eases this burden by automatically checking text, usernames, images, and videos for hate speech, cyberbullying, explicit or harmful content, fake news, and spam. The algorithm can then remove content that violates a company’s terms and conditions, or the accounts posting it.
Online Retail
Content moderation isn’t just limited to social platforms. Online retailers also use content moderation tools to display only quality, business-friendly content to consumers. A hotel booking website, for example, may leverage AI to scan all hotel room images and remove any that violate site rules (e.g., no people can be visible in a photo). Retailers also leverage a combination of ML techniques to achieve the customization they need for their business.
How Does Content Moderation Work?
The content queues and escalation rules for ML-based review systems vary by company, but AI moderation generally enters at step one, step two, or both:
- Pre-moderation. AI moderates user content before posting. Content categorized as not harmful is then made visible to users. Content deemed to have a high probability of being harmful or not business-friendly is removed. If the AI model has low confidence in its predictions, it will flag the content for human review.
- Post-moderation. Users report harmful content, which AI or a human then reviews. If the AI does the review, it follows the same workflow described in step one, automatically deleting any content determined to be harmful. A minimal sketch of the confidence-based routing behind both steps follows this list.
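To make the escalation concrete, here is a minimal sketch of that routing in Python. The thresholds, action names, and classifier interface are illustrative assumptions, not any particular platform’s implementation:

```python
# Minimal sketch of confidence-threshold routing. The thresholds are
# hypothetical; any model that returns P(harmful) for a piece of
# content could slot in here.
AUTO_REMOVE = 0.90   # at or above this probability, remove automatically
AUTO_ALLOW = 0.10    # at or below this probability, publish automatically

def route(p_harmful: float) -> str:
    """Map a model's harm probability to a moderation action."""
    if p_harmful >= AUTO_REMOVE:
        return "remove"        # high confidence: policy violation
    if p_harmful <= AUTO_ALLOW:
        return "publish"       # high confidence: safe
    return "human_review"      # low confidence: escalate to a moderator
```

The same routine serves both modes: pre-moderation runs it before content is posted, while post-moderation runs it on content users have reported.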
Depending on the type of media, AI uses a variety of ML techniques to make content predictions.
Text
- Natural language processing (NLP): To understand human language, computers rely on NLP. They may use techniques like keyword filtering to identify unfavorable language for removal.
- Sentiment analysis: Context matters on the internet, and sentiment analysis helps computers identify tone, such as sarcasm or anger (a simple sketch combining keyword filtering with sentiment scoring follows this list).
- Knowledge bases: Computers can rely on databases of known information to predict which articles are likely fake news and to identify common scams.
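As a rough illustration of the first two techniques, the sketch below pairs a keyword filter with a sentiment score. The blocklist terms and the -0.8 cutoff are hypothetical policy choices, and it assumes NLTK’s VADER lexicon has been downloaded (nltk.download("vader_lexicon")):

```python
# Sketch: keyword filtering plus sentiment analysis as a second signal.
# BLOCKLIST and the -0.8 cutoff are stand-ins for real policy choices.
import re
from nltk.sentiment import SentimentIntensityAnalyzer  # needs vader_lexicon

BLOCKLIST = {"examplescam", "exampleslur"}  # hypothetical policy terms
sentiment = SentimentIntensityAnalyzer()

def moderate_text(text: str) -> str:
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    if tokens & BLOCKLIST:
        return "remove"           # unambiguous keyword match
    if sentiment.polarity_scores(text)["compound"] < -0.8:
        return "human_review"     # hostile tone, but context matters
    return "allow"
```

Note the routing: a keyword hit is treated as clear-cut, while a strongly negative tone only escalates to a human, since sarcasm or quoted speech can fool a sentiment score.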
Image and Video
- Object detection: Image analysis can identify target objects, such as nudity, in images and videos that don’t meet platform standards (a simplified sketch follows this list).
- Scene understanding: Computers are learning to understand the context of what’s happening in a scene, driving more accurate decision-making.
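As one simplified example of object detection in this setting, the sketch below runs a torchvision detector pretrained on COCO and flags images containing objects from a blocklist. The blocked categories and score threshold are hypothetical; a production platform would train a detector on its own policy labels (nudity, weapons, and so on) rather than COCO’s generic classes:

```python
# Sketch: flag images whose detected objects appear on a policy blocklist.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]   # COCO class names
preprocess = weights.transforms()

BLOCKED = {"knife", "bottle"}  # hypothetical policy categories

def flag_image(path: str, min_score: float = 0.8) -> list[str]:
    """Return blocklisted object labels detected in the image."""
    img = read_image(path)                 # uint8 CHW tensor
    with torch.no_grad():
        out = model([preprocess(img)])[0]
    return [
        categories[label]
        for label, score in zip(out["labels"], out["scores"])
        if score >= min_score and categories[label] in BLOCKED
    ]
```

An image with a non-empty result would feed the same remove-or-review routing shown earlier.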
All Data Types
Regardless of data type, companies may use user reputation technology to identify which content they can trust. Computers categorize users with a history of posting spam or explicit content as “non-trusted” and apply greater scrutiny to any future content they post. Reputation technology also combats fake news: computers are more likely to label content from unreliable news sources as false.

Fortunately, content moderation constantly generates new training data. When a computer routes content to a human reviewer, the reviewer labels the content as harmful or not, and that labeled data feeds back into the algorithm to improve future accuracy.
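A minimal sketch of both ideas, assuming a per-user reputation score in (0, 1] and a simple in-memory list standing in for the labeled-data store that retraining would consume:

```python
# Sketch: reputation-adjusted thresholds plus a human-label feedback loop.
# The reputation scale, threshold math, and storage are all hypothetical.
training_data: list[tuple[str, bool]] = []   # (content, is_harmful) pairs

def auto_remove_threshold(reputation: float, base: float = 0.90) -> float:
    """Non-trusted users (low reputation) face a lower auto-removal bar."""
    return base * reputation   # e.g., reputation 0.5 halves the threshold

def record_human_label(content: str, is_harmful: bool) -> None:
    """Each moderator decision becomes a new labeled training example."""
    training_data.append((content, is_harmful))
```

Over time, the recorded labels are exactly the ground truth the human-in-the-loop process relies on: they go back into training to improve the model’s future accuracy.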
Overcoming the Challenges of Content Moderation
Content moderation poses many challenges to AI models. The sheer volume of content demands fast models that don’t sacrifice accuracy, yet accurate models require data, and public datasets of platform content are scarce because most of that data is retained as proprietary by the company that collects it.

There’s also the issue of language. The internet is global, meaning your content moderation AI must recognize dozens of languages, plus the social contexts of the cultures that speak them. Language changes over time, so updating your model regularly with new data is essential.

There are also inconsistencies around definitions. What does cyberbullying mean? Is a nude statue art, or is it explicit? Keeping these definitions consistent within your platform is important to maintaining user trust in the moderation process. Users are creative and constantly evolve their approaches to find loopholes in moderation; to counteract this, you must continuously retrain your model to weed out issues like the latest scam or fake news.

Finally, be aware of bias in content moderation. When language or user characteristics are involved, there’s potential for discrimination. Diversifying your training data and teaching your model to understand context are critical to reducing bias.

With all of these challenges, producing an effective content moderation platform can seem insurmountable. But success is possible: many organizations turn to third-party vendors to provide sufficient training data, along with a global crowd of labelers who speak a variety of languages. Third-party partners also bring needed expertise in ML-enabled content moderation tools to deliver scalable, efficient models.
Insight from Appen Content Moderation Expert, Justin Adam
At Appen, we rely on our team of experts to help you build cutting-edge models for content moderation that deliver a quality customer experience and improve business ROI. Justin Adam, a program manager overseeing several content moderation projects, is one of our team’s leading experts in ensuring customer success when implementing and improving content moderation with machine learning. Justin’s top three insights for successful content moderation projects:
- Update Policy as the Real World Dictates: Every content moderation decision should follow defined policy, which means policy must evolve rapidly to close gaps, gray areas, and edge cases as they appear, particularly for sensitive topics. Monitor market-specific content trends, identify policy gaps, recommend changes, and deploy them quickly so that delivered data reflects moderator decisions aligned with the latest and most comprehensive policy guidance.
- Manage Demographic Bias: Content moderation is most effective, reliable, and trustworthy when the pool of moderators is representative of the general population of the market being moderated. Define the demographics you need and manage diversity sourcing accordingly, so the data feeding your model isn’t subject to demographic bias.
- Develop a Quality Management Strategy and Expert Resources to Support It: Content moderation decisions face scrutiny in today’s political climate. Effectively identifying, correcting, and, most importantly, preventing errors requires a comprehensive strategy. We often recommend and can help implement an approach tailored to each client’s needs, including developing a full team of trained policy subject matter experts, establishing quality control review hierarchies, and tailoring quality analysis and reporting.
What Appen Can Do For You
With over 20 years of experience helping companies build and launch AI models, we’re proud to offer comprehensive data classification pipelines for your content moderation needs. Our proprietary quality control technologies deliver high accuracy and precision, backed by the expertise and platform features you need for quick delivery and scalability.

Learn more about our expertise and how we can help with your specific content moderation needs.