October 21, 2021

Moderating and maintaining an online community

Thanks to YouTube, Reddit, Twitter, Facebook, and the like, most of us are familiar with the important and complex role that moderation plays in massive online communities. There is no such thing as “perfect” community moderation, so don’t go looking for an ideal solution. But there are guideposts you can follow when thinking through the role of moderation within your content-driven application.


The key thing to consider at the outset is why moderation is essential. To put it simply, it’s because people can be jerks, and most people don’t want to be around a lot of jerks. Would you want to go to a town hall discussion about an election where any citizen could mouth off at another, saying vile and disruptive things, without recourse? Nope, me neither. In the analog world, we have policies and procedures to enforce decorum: if you shout down the judge, you’re held in contempt of court and carried away. Online communities require policies that enforce a code of conduct as well.


Here’s a user review left in the app store for the anonymous social app Whisper. It took only a few seconds to find, and it’s a great example of the downsides of an anonymous identity model and of how difficult such platforms are to moderate.


“A friend of mine recommended this app to me and it’s been great and all — minus all the incredibly narrow minded people on it and the men all older than 25 who really only use this app to send pictures of their body to other people. it’s a great way to spread peoples’ thoughts but without an identity. i think it’s a great idea, but really it’s not being used in the way it should...”


This commenter loves the idea of the app, but it falls apart in practice. Without a moderation model that requires people to use their real identity, it’s hard to hold people accountable for their actions. As a result, the quality of participation erodes and most people opt out of anonymous communities because they inevitably turn ugly. It’s the same reason that most major online publications have turned off comments on their articles.


As the creator of a content community, the question you must answer is “What does quality mean for my product, and how do I enforce it?” I can’t give you that answer since it’s unique to each online community. But I can talk about the various levers at your disposal when it comes to stitching moderation into your product from inception to scale.


From my perspective, there are three methods for moderating behavior within a content application:


  1. Official company policies
  2. Elected/chosen community moderators
  3. Product features + machine learning


Let’s talk about each.

Official company policies

An official company policy on moderation is a common approach. The company determines what is and is not okay to say or do based on its worldview and its vision for the company. These moderation policies, meant to enforce some minimum bar of quality participation from users, are crafted by euphemistically named teams such as the “Trust and Safety” team.


I say euphemistically named because these are, in practice, censorship teams. They determine what you can say and do based on their collective preferences and beliefs. Some restrictions, such as the ban on child pornography, are mandated by law, which is a great thing. But many policies simply reflect the team’s own preferences.


For example, Twitter has a policy that doesn’t allow users to display pornographic or violent content in profile pictures or header images. However, a user can tweet pornographic content. It will be obscured with a “sensitive content” label, which puts control in the hands of the viewer, who can click a button to reveal the obscured material.


This approach is not governed explicitly by state or federal law. It is a moderation preference that reflects Twitter’s worldview and vision for the company. Similarly, if you choose to build an online community, you’ll have to start by designing the moderation policies to censor what can and can’t be said or done on your platform.
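

To make this concrete, here is a minimal sketch (in Python) of how a placement-dependent policy like the one described above might be encoded. The content categories, placements, and actions are hypothetical, and this is not Twitter’s actual system; the point is that the policy table itself embodies the worldview you choose.

    # Illustrative sketch only: the categories, placements, and actions are
    # hypothetical, not any real platform's implementation.
    from enum import Enum, auto

    class Placement(Enum):
        PROFILE_IMAGE = auto()
        HEADER_IMAGE = auto()
        POST_BODY = auto()

    class Action(Enum):
        ALLOW = auto()
        LABEL_SENSITIVE = auto()  # obscure behind a click-to-reveal warning
        REMOVE = auto()

    # Policy table: (content category, placement) -> action.
    # Anything not listed falls through to ALLOW.
    POLICY = {
        ("adult_content", Placement.PROFILE_IMAGE): Action.REMOVE,
        ("adult_content", Placement.HEADER_IMAGE): Action.REMOVE,
        ("adult_content", Placement.POST_BODY): Action.LABEL_SENSITIVE,
        ("graphic_violence", Placement.PROFILE_IMAGE): Action.REMOVE,
        ("graphic_violence", Placement.POST_BODY): Action.LABEL_SENSITIVE,
    }

    def moderate(category: str, placement: Placement) -> Action:
        """Return the action this hypothetical policy prescribes."""
        return POLICY.get((category, placement), Action.ALLOW)

    # The same category is removed in one placement and merely labeled in another.
    assert moderate("adult_content", Placement.PROFILE_IMAGE) is Action.REMOVE
    assert moderate("adult_content", Placement.POST_BODY) is Action.LABEL_SENSITIVE

Changing a row in that table is a product and values decision, not an engineering one, which is exactly why these policies end up reflecting the company’s worldview.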


At Quora, our quality definition was aligned with the substance of questions and answers. We wanted a community that represented the best of human experiential knowledge. That meant that we were happy to remove questions that were antagonistic towards an individual, such as one user asking another user a very personal or accusatory question. Our policies also meant that we would remove answers with similar characteristics. We did not accept abusive answers such as people using f-bombs or attacking other users of our service. Civility was paramount, so we had company-created policies in place to preserve courtesy.

Community moderation

As your community begins to scale and evolve, you may need to enlist others to help you identify and draft moderation policies that keep up with the changing nature of the community.


There are several examples worth referencing. One version of community moderation that enforces quality participation is the Yelp Elite. Another is the now-defunct Quora moderator program. An often-criticized example is Wikipedia’s volunteer moderators.


Community moderation is tough. It’s like managing a growing classroom of students who all begin to think that they should be the teacher. Be careful when enlisting community moderation from people who aren’t official representatives of your company. If the community gets large enough and is left unchecked, its members may come to believe that the content platform you’ve created is theirs, not yours. Wikipedia embraced this fully decentralized approach. It’s led to broad access to a lot of great content, but to a long history of conflict and complaints as well. At one point, the moderators in charge of the Spanish version of the site went rogue in response to Wikipedia considering selling ads on the website. Proceed with caution when creating community moderation programs.

Features and machine learning

A superior alternative to community moderation is feature-based moderation. This approach is increasingly enabled by advancements in machine learning and produces better outcomes at scale than a team of human moderators.


You use products all the time that rely on UI features + machine learning to enhance the experience and maintain quality controls. If an answer on Quora receives enough downvotes, the answer will be “collapsed,” i.e. hidden. The ratio of likes to dislikes, along with the volume and velocity of likes, helps YouTube determine which videos to highlight on its home screen versus eject into the ether.
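

As a rough illustration of the kinds of heuristics at play, here is a minimal sketch in Python. The thresholds and weights are invented for the example; the real Quora and YouTube systems are far more sophisticated and are not public.

    # Illustrative heuristics only; thresholds and weights are invented,
    # not Quora's or YouTube's actual logic.
    from dataclasses import dataclass

    COLLAPSE_MIN_VOTES = 10        # don't act on tiny samples
    COLLAPSE_DOWNVOTE_RATIO = 0.7

    @dataclass
    class Answer:
        upvotes: int
        downvotes: int

    def should_collapse(answer: Answer) -> bool:
        """Hide an answer once enough voters have judged it low quality."""
        total = answer.upvotes + answer.downvotes
        if total < COLLAPSE_MIN_VOTES:
            return False
        return answer.downvotes / total >= COLLAPSE_DOWNVOTE_RATIO

    @dataclass
    class Video:
        likes: int
        dislikes: int
        likes_last_24h: int        # crude proxy for velocity

    def promotion_score(video: Video) -> float:
        """Combine like ratio and recent velocity into one ranking score."""
        total = video.likes + video.dislikes
        like_ratio = video.likes / total if total else 0.0
        return 0.7 * like_ratio + 0.3 * min(video.likes_last_24h / 1000, 1.0)

    print(should_collapse(Answer(upvotes=2, downvotes=11)))    # True
    print(round(promotion_score(Video(900, 100, 500)), 2))     # 0.78

Even toy versions like this make the design questions clear: how many votes are enough before the system acts, and how much weight do you give recency versus raw popularity?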


Obviously, it takes a lot of data before these mechanisms can kick in and productively manage the quality of your user experience. When first getting started, you will have to rely mostly on human moderation. Thankfully, machine learning tools are becoming increasingly available, so more startups will have access to these scalable moderation tools than in the past. What this may mean for your content platform is that, compared to the startups that came before you, you can move toward a machine learning-driven model more quickly, or even bypass the community moderation step entirely. Consider yourself lucky!
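

As a sketch of what that might look like in practice: score each new post with whatever toxicity model or third-party moderation API you adopt, let the machine act on the clear-cut cases, and route the ambiguous middle to human reviewers. The thresholds below are arbitrary placeholders, not recommendations.

    # Minimal sketch of an ML-assisted triage step. The toxicity score is
    # assumed to come from whatever model or moderation API you adopt;
    # the thresholds are arbitrary and would need tuning on your own data.

    AUTO_HIDE_THRESHOLD = 0.90   # confident enough to act without a human
    REVIEW_THRESHOLD = 0.60      # uncertain: queue for a human moderator

    def triage(toxicity_score: float) -> str:
        """Decide what to do with a new post given a model's toxicity score."""
        if toxicity_score >= AUTO_HIDE_THRESHOLD:
            return "hide"          # the machine handles the obvious cases
        if toxicity_score >= REVIEW_THRESHOLD:
            return "human_review"  # people handle the ambiguous middle
        return "publish"

    print(triage(0.95))  # hide
    print(triage(0.70))  # human_review
    print(triage(0.10))  # publish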
