So your police department’s social post was removed for content violations

Published in Police One, April 28, 2022


Sometimes, police need to share disturbing pictures or video on social media to catch criminals, but content moderation is a tricky subject. Here’s what agencies need to know.


Everyone knows that police work isn’t always pretty. If you’re using social media to engage with your community, your posts won’t always be cops and kids playing basketball. Often, you will need to share content that might make people uncomfortable, like photos of wanted suspects or security videos of violent crimes.


Police should be posting this type of content because their social media channels can reach many people, which can lead to tips and arrests. However, this content may sometimes be removed from or restricted on certain social media platforms, or members of the public may complain about its graphic nature. How do police departments decide when to share violent content that could aid a criminal investigation? And what happens when social media moderators disagree? Before we can answer those questions, we need to travel back to the dawn of the internet.


A BRIEF HISTORY OF ONLINE CONTENT MODERATION

Content moderation on social media platforms is perhaps one of the most widely discussed topics in online policy, particularly when it comes to violence, threats and “hate speech,” which means different things to different people.


To better understand content moderation, we need to look at Section 230 of the 1996 Communications Decency Act, also known as “the 26 words that created the internet.” Section 230 is the reason why social media platforms can almost never be sued for content published by users on their platforms. Here’s what Section 230 says:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Some people misread Section 230 as permission for platforms to publish whatever they want without worrying about consequences. In reality, Section 230 allows platforms to moderate more, not less. Stay with me.


Before Section 230 was written, websites that tried to moderate could be held liable for inappropriate content, because by moderating, the site appeared to have reviewed the content and deemed it acceptable. That imperfect discernment left platforms vulnerable to lawsuits accusing them of letting bad content slip through the cracks, and it became easier not to moderate at all. Thus, Section 230 was created to allow companies to moderate content and to protect them when they get it wrong.


TO POST OR NOT TO POST

When drafting a social media post, have a clear goal in mind. Remember, the investigation comes first. If you are looking to identify a suspect, your post should make that goal obvious. Put the suspect’s identifying information front and center in your Tweet, Instagram caption, or Facebook post.

But what about the brutality of the incident? This is where it gets tricky. If the crime is especially jarring, a graphic video will likely help the case reach more people, which may generate more leads for investigators. That said, be balanced in how you present violent content. Don’t show more than what is needed to make your point. The graphic details of a crime should be shared only when doing so serves the investigation, not to make a political point.


POLICE DEPARTMENT SOCIAL MEDIA ACCOUNTS MUST WALK A FINE LINE

In short, the field is a bit of a mess. Social media platforms like Twitter, Facebook and YouTube often use a combination of machine learning, user reports and human moderators to determine what should be removed from the platform for violating their terms of service. Sometimes they get it right, but more often, inappropriate content slips through the cracks, or acceptable content is caught in a wide net and removed without good reason.


Sometimes, moderators may compromise by marking content as sensitive or hiding it behind an age restriction wall. Unfortunately, these compromises will significantly decrease your audience because most people won’t take the extra step to click on a link or verify their age. Police departments must be aware of these pitfalls and try to walk between the raindrops when sharing potentially sensitive content.


WHAT CAN MY DEPARTMENT DO IF ITS POST IS TAKEN DOWN BY MODERATORS?

If you can, develop a relationship with your social media company’s government representative. That way, if you feel you were restricted unjustly, you can appeal the decision or at least get clarification. It’s also helpful to connect with other people in your field through organizations such as the IACP Public Information Officers Section to trade tips, tricks and, yes, commiseration.

For what it’s worth, it seems no platform has found the perfect content moderation formula. In fact, these companies often admit that moderation is a haphazard process in need of fine-tuning. It may take a while before companies and users get that balance right. Until then, it’s helpful to know that the moderation of police investigation content isn’t necessarily a targeted policy choice, but rather a longstanding problem that is shaping the future of online discourse as we speak.


