Brand Safety Across the Facebook Family of Apps and Services

Use this lesson to:

  • Summarize Facebook’s approach to protecting the brand safety of advertisers.
  • Describe how Facebook’s policies and processes identify and remove prohibited content.
  • Identify Facebook’s policies for publishers showing ads.
  • Understand how we collaborate with industry partners to help us increase transparency and improve accountability.

What is brand safety?

Brand safety can mean different things to different advertisers, but it most often refers to preventing their ads from delivering within or next to content that conflicts with their brand's values.

We strive to ensure that brands feel safe when advertising across the Facebook family of apps and services by enforcing policies and offering relevant controls. Brand safety spans a spectrum of "risks" that may cause a brand concern: safety, sensitivity and suitability.

Brand safety isn’t one size fits all

Depending on how sensitive your brand is to content placement, you may choose to use all, some or none of our advertiser controls. Your ads should never show adjacent to or within content that violates our Community Standards, because such content is prohibited on all of our platforms.

In this course you’ll learn what we’re doing to help keep people and advertisers safe.

The Facebook community: Safe, supportive, inclusive

Our mission is to give people the power to build community and bring the world closer together. Our commitment to giving people a voice remains paramount, and for people to express themselves, they need to know they are safe. That's why we have rules that determine what is and isn't allowed. As a general principle, while people should be able to say things others may not like, they should not be able to say things that put others in danger. It's our responsibility to help ensure that communities are safe, civically engaged, supportive, informed and inclusive.

We want brands and people to feel safe 

We developed our Community Standards to help keep people safe across the Facebook family of apps and services. Our goal is to allow everyone the freedom to share while ensuring that we remove content that’s harmful or illegal.

The best way to contribute to brand safety is to prevent harmful content from ever appearing on our services in the first place. We'll never be perfect, but we continue to invest in technology and people to stop as much harmful content as we can.

Facebook Community Standards and Instagram Community Guidelines outline what is and isn't allowed on our platforms. They cover a wide range of topics, including hate speech, nudity, child exploitation and violent or graphic content. In addition, our ad policies describe the kinds of ads that are and aren't allowed on our platforms.

Our Community Standards, commerce policies and ad policies apply to the entire global community using Facebook platforms, so whether you're an individual or a business, you're held accountable for the content you post. These policies guide how we review and remove inappropriate content that we or members of our community discover, and we're working to make it easier for people to report content that may violate our standards so that we can review it and quickly remove it if it does.

A multi-faceted approach to brand safety

Our approach includes three key areas:

  • Create a safe and welcoming community.
  • Maintain a high-quality ecosystem of content, publishers and ads.
  • Collaborate proactively with industry partners.

Our commitment

At Facebook, we're committed to improving our processes and tools to help keep our community safe. To that end, we're investing in people, tools, processes and policy.

How we identify and remove violating content

To track our progress and demonstrate our continued commitment to making Facebook and Instagram safe and inclusive, we released our first Community Standards Enforcement Report (CSER) in May 2018.

Among other areas, the report covers the following violation categories:

  • Adult nudity and sexual activity
  • Hate speech
  • Terrorist propaganda
  • Fake accounts
  • Spam
  • Violence and graphic content
  • Bullying and harassment
  • Child nudity and sexual exploitation of children
  • Illicit sales of regulated goods (such as firearms and drugs)

To learn more about how we identify and remove content that violates our standards and policies, please see: Community Standards Enforcement Report.

Partner collaboration

We work with external experts who evaluate our processes and data methodologies to help us increase transparency and improve accountability. For example, the Data Transparency Advisory Group, an independent body made up of international experts in measurement, statistics, criminology and governance, collaborates with us on the CSER.

We collaborate with industry partners to share knowledge, build consensus and make all online platforms safer for businesses.

For example:

  • We completed the JICWEBS Digital Trading Standards Group's brand safety audit and received the IAB UK Gold Standard.
  • We work with the World Federation of Advertisers' Global Alliance for Responsible Media (GARM) to help create a more sustainable and responsible digital ecosystem.
  • We collaborate with industry organizations to help improve how we review content and enforce our Community Standards.

Brand safety across platforms and placements

Three key components:

  1. Our policies and enforcement (Facebook Community Standards, Instagram Community Guidelines, commerce and ad policies) dictate what’s allowed or not allowed on our platforms.
  2. Our Partner Monetization Policies provide clear guidance around the types of publishers and creators eligible to earn money on Facebook and the kind of content that can be monetized.
  3. Our brand safety controls give advertisers confidence and control over where their ads appear.
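
The next lesson covers advertiser controls in depth, but as a rough illustration of the third component, the sketch below creates an ad set with brand safety exclusions through the Marketing API. The endpoint is real; the field names (excluded_publisher_categories, excluded_publisher_list_ids), category values and API version are assumptions drawn from older documentation, so verify them against the current Marketing API reference before use.

    import json
    import requests

    # Placeholder credentials and IDs -- substitute real values.
    AD_ACCOUNT_ID = "act_<AD_ACCOUNT_ID>"
    ACCESS_TOKEN = "<ACCESS_TOKEN>"
    CAMPAIGN_ID = "<CAMPAIGN_ID>"

    params = {
        "name": "Brand-safe ad set (illustrative)",
        "campaign_id": CAMPAIGN_ID,
        # Exclude sensitive inventory categories (illustrative values).
        "excluded_publisher_categories": json.dumps(
            ["debated_social_issues", "mature_audiences", "tragedy_and_conflict"]
        ),
        # Apply a previously uploaded publisher block list (hypothetical ID).
        "excluded_publisher_list_ids": json.dumps(["<BLOCK_LIST_ID>"]),
        "access_token": ACCESS_TOKEN,
    }

    # Create the ad set under the ad account; the Graph API accepts
    # form-encoded parameters with JSON-encoded list values.
    response = requests.post(
        f"https://graph.facebook.com/v12.0/{AD_ACCOUNT_ID}/adsets",
        data=params,
    )
    print(response.json())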

Partner Monetization Policies: Additional standards

Facebook helps creators and publishers earn money from their content. This can include showing ads in content or working with a business partner to promote a brand or product.

Not all content is appropriate for monetization. To use our monetization features and maintain a positive experience for our community and a brand-safe environment for our advertisers, creators and publishers must comply with our:

  • Community Standards
    Creators and publishers who have violated our Community Standards, including our community policies regarding intellectual property, authenticity and user safety, may be ineligible or may lose their eligibility to monetize using our features.
  • Payment Terms
    Creators and publishers must comply with Facebook’s Payment Terms or may lose their eligibility to use our monetization features.
  • Page Terms
    Creators who publish content from a Page must comply with our Page Terms or may lose their eligibility to use our monetization features.
  • Partner Monetization Policies
    To be eligible for monetization, content must adhere to our Partner Monetization Policies. Content that doesn't meet these criteria may be ineligible for monetization, and repeated violations may result in the removal of access to our monetization features.

Creators and publishers must also:

  • Share authentic content
    • Creators and publishers who post content flagged as misinformation or false news may be ineligible or may lose their eligibility to monetize.
    • Creators and publishers who share clickbait or sensationalist content may be ineligible or may lose their eligibility to monetize.
  • Develop an established presence
    Creators and publishers must have an authentic, established presence on Facebook. To be eligible for all monetization features, that means maintaining an established presence for at least one month. For in-stream video, it also means maintaining a sufficient number of Facebook friends or followers.
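
Purely as an illustration (not Facebook's actual implementation), the eligibility requirements above can be read as a conjunction of checks. The Creator type, function name and numeric thresholds below are hypothetical placeholders, since the real requirements vary by monetization feature:

    from dataclasses import dataclass
    from datetime import date, timedelta

    # Hypothetical thresholds for illustration only.
    MIN_PRESENCE = timedelta(days=30)  # "established presence for at least one month"
    MIN_FOLLOWERS = 10_000             # stand-in for "a sufficient number of followers"

    @dataclass
    class Creator:
        page_created: date             # when the Page or profile was established
        followers: int                 # Facebook friends or followers
        violated_standards: bool       # Community Standards violations on record
        flagged_misinformation: bool   # content flagged as misinformation or false news

    def eligible_for_in_stream(creator: Creator, today: date) -> bool:
        """Compose the eligibility checks described in the policy text above."""
        return (
            not creator.violated_standards
            and not creator.flagged_misinformation
            and today - creator.page_created >= MIN_PRESENCE
            and creator.followers >= MIN_FOLLOWERS
        )

    # A two-month-old Page with 12,000 followers and a clean record qualifies.
    print(eligible_for_in_stream(
        Creator(date(2020, 1, 1), 12_000, False, False), date(2020, 3, 1)
    ))  # True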

We’re getting better at proactively removing violating content

We’re improving when it comes to proactive enforcement. When it comes to content that we took action on, we proactively detected over 90% in almost all policy areas.1

Key takeaways

  • Facebook is committed to keeping our community safe.
  • The best way to contribute to brand safety is to prevent harmful content from ever appearing on our services in the first place.
  • Brand safety at Facebook means ensuring that brands feel safe advertising across the Facebook family of apps and services.
  • There’s a spectrum of brand safety “risks” that may cause a brand concern: safety, sensitivity and suitability.
  • We rely on a variety of tools and processes including human review, automated tools, community reporting and expert consultation to help us identify and remove content that violates our standards.
  • We’ll continue to develop and improve an open, independent, and rigorous policy-making process, to ensure that our services are a positive force for bringing people closer together.

Keep learning

This is the first of two lessons in the course: Brand Safety Across the Facebook Family of Apps and Services. Please continue with the next lesson, Advertiser Controls.
