
Moderating User Generated Content

User Generated Content (UGC) is any material created or shared by a visitor or user on a website. Examples of UGC include comments on articles, tweets, new threads in online forums, as well as blog posts and replies.

With the advent of Web 2.0, websites allowed users to easily create and upload content, opening up numerous opportunities. While this new scenario had a huge impact on user engagement, it can also be risky and lead to a bad user experience. Individuals can spam, troll, or publish abusive content that reflects poorly on the brand’s image and, what’s more, drags the owners into costly lawsuits.

While many apps provide a free atmosphere where almost anything can be posted, product owners are aware of these potential legal issues and must ensure that UGC meets certain standards when users submit new content.

Types of content moderation

As noted above, it is crucial for business and brand reputation that only content meeting certain UGC standards gets posted. Inappropriate content may include profanity, violence, explicit sex, spam messages, scam attempts, and the like.

There are three main methods of content moderation on the web:

Automated moderation: using software algorithms. This method is the cheapest and quickest, but also the least reliable. Malicious users can evade filtering techniques by slightly modifying the words they use, as the sketch after this list illustrates.

Community moderation: since the early days of web forums, moderation has been left to volunteer administrators who ensured content met community standards. Nowadays, websites like YouTube let users flag content they consider inappropriate for review.

Human moderation: some companies hire full-time employees for content moderation.
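
To illustrate why automated filters are easy to evade, here is a minimal sketch of a keyword blacklist check (the blacklist and the obfuscated example are hypothetical, not taken from any real moderation system):

```python
import re

# Hypothetical blacklist; real systems combine much larger curated lists
# with machine-learning classifiers.
BLACKLIST = {"scam", "spam"}

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blacklisted word (exact matches only)."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLACKLIST for word in words)

print(naive_filter("This is a scam"))  # True  -> caught by the filter
print(naive_filter("This is a sc4m"))  # False -> a one-character change slips through
```

A single substituted character defeats the exact-match check, which is why purely automated moderation is considered the least reliable of the three methods.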

How big websites moderate content

Social network sites generally have strict content restriction policies. Because of this, brands need strong strategies to maximize visitor satisfaction and, at the same time, ensure a clean atmosphere by removing offensive content.

Facebook

The social network giant outsources its content moderation to teams based in the Philippines and India. The Philippines, formerly a US colony, has strong ties to American culture, which is valuable for determining what kind of material Americans consider offensive. There are more than 15,000 moderators working for Facebook worldwide.

The reporting process starts when a user flags a piece of content they consider inappropriate by clicking the Report option. Once the user confirms, the report is sent to a moderation team that reviews the content; if it is neither ignored nor deleted, it is escalated to a moderation team based in California, which helps with decisions that require cultural context.

Facebook works closely with organizations such as the Safety Advisory Board and the National Cyber Security Alliance to improve policies and report handling.

Flow for reporting a Facebook post: step 1, report the post as spam; step 2, the standards are reinforced prior to submission.

YouTube

A few years ago, Google improved the way it filters offensive content by reworking the comments section along the lines of what Facebook had done before. The default sort order is relevance rather than recency. Nundu Janakiram, former Product Manager at YouTube, said this helped highlight the most meaningful conversations.
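
As a rough sketch of what relevance-based ordering can look like (this is not YouTube’s actual algorithm; the scoring weights below are illustrative assumptions), a comment feed could rank engagement against age instead of sorting by timestamp alone:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Comment:
    text: str
    likes: int
    replies: int
    created_at: datetime

def relevance_score(comment: Comment, now: datetime) -> float:
    """Illustrative score: engagement weighted against age (weights are assumptions)."""
    age_hours = (now - comment.created_at).total_seconds() / 3600
    return comment.likes * 2 + comment.replies * 3 - age_hours * 0.1

now = datetime.now(timezone.utc)
comments = [
    Comment("First!", likes=1, replies=0, created_at=now - timedelta(hours=1)),
    Comment("A thoughtful reply...", likes=40, replies=12, created_at=now - timedelta(days=2)),
]

by_recency = sorted(comments, key=lambda c: c.created_at, reverse=True)
by_relevance = sorted(comments, key=lambda c: relevance_score(c, now), reverse=True)
print([c.text for c in by_relevance])  # the older but highly engaged comment surfaces first
```

Under this kind of scoring, a meaningful conversation with many likes and replies outranks a fresh but empty comment, which is the effect Janakiram described.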

Twitter

Like other social network apps, Twitter has a Report feature for every tweet in the user’s feed. There is also an option to report a user’s account for policy infringement, which can block the account for a certain period of time until the owner deletes the abusive tweets.

A “quality filter” also helps prevent trolling and abusive tweets: when content from a suspicious account is detected, it is automatically removed from the Home feed.
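
A minimal sketch of how such a feed-level filter could behave, assuming a hypothetical per-account reputation score (Twitter’s real signals are not public), might look like this:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str

# Hypothetical reputation scores in [0, 1]; a real system would derive them
# from many signals (account age, verification, prior reports, and so on).
ACCOUNT_REPUTATION = {"alice": 0.9, "spambot42": 0.1}
REPUTATION_THRESHOLD = 0.3  # assumed cutoff for showing content in the Home feed

def filter_home_feed(tweets: list[Tweet]) -> list[Tweet]:
    """Drop tweets whose authors fall below the reputation threshold."""
    return [t for t in tweets
            if ACCOUNT_REPUTATION.get(t.author, 0.5) >= REPUTATION_THRESHOLD]

feed = [Tweet("alice", "Hello!"), Tweet("spambot42", "Buy followers now")]
print([t.text for t in filter_home_feed(feed)])  # ['Hello!']
```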

Last but not least, phone verification helps identify genuine users.

Criteria vary by region

Websites that serve users globally must be able to understand not only the language in which the UGC is written, but also its context and intent. Acceptable norms differ from country to country, which means that content that is appropriate in the United States may be strongly rejected in other cultures.

For example, in certain countries the color red or even the “thumbs up” gesture is inappropriate. The same goes for any picture or video showing a certain amount of a woman’s skin. In some parts of the Middle East, politics and religion are also very sensitive topics.

Like the middle finger in the UK and USA, the thumbs up gesture is considered rude and offensive in countries like Iran, Greece, Russia, and parts of West Africa.

Conclusion

Content moderation has become a discipline in its own right, requiring expertise not only in detecting inappropriate conduct, but also in understanding user patterns and cross-cultural differences.

While there are many software-based solutions whose algorithms remove spam and blacklisted words, many of them fail to detect slight variations. Apps that rely heavily on user-generated content share a common feature: the ability for any user to report behavior they consider offensive, letting site administrators review the conduct and act accordingly. Concerned about enforcing their content policies, companies like Facebook have their own dedicated teams of UGC moderators.

Over the last few years, different ways of surfacing good content have been launched: phone verification helps add credibility to a user’s profile, and sorting by relevance rather than recency by default is a great way to highlight good conversations and push irrelevant content to the bottom of the page.

Further reading

  • Saudi Arabia Business Etiquette & Culture
  • WERSM, George Simons, How Does Facebook Moderate Content?
  • WIRED, Adrian Chen, The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed
  • CNET, Seth Rosenblatt, YouTube gets the yuck out in comments cleanup
  • TNERD, Yasmita, Twitter’s new ‘quality filter’ to prevent abusive tweets and trolling
  • TECHTIMES, Lauren Keating, Twitter Will Lock Trolls Out Of Accounts In New Threats To Prevent Abuse
  • COGNIZANT, How to De-Risk the Creation and Moderation of User-Generated Content
  • WIKIPEDIA, Censorship in Saudi Arabia
