Use this lesson to:
- Summarize Facebook’s approach to protecting the brand safety of advertisers.
- Describe how Facebook’s policies and processes identify and remove prohibited content.
- Identify Facebook’s policies for publishers showing ads.
- Explain how Facebook collaborates with industry partners to increase transparency and improve accountability.
What is brand safety?
When advertisers discuss brand safety, they can mean different things, but they're often talking about preventing their ads from being delivered within or next to content that isn't appropriate for their brand.
We strive to ensure that brands feel safe advertising across the Facebook family of apps and services by enforcing policies and offering relevant controls. Brand safety spans a spectrum of "risks" that may concern a brand: safety, sensitivity and suitability.
Brand safety isn’t one size fits all
Depending on how sensitive your brand is to content placement, you may choose to use all, some or none of our advertiser controls. Your ads should never show adjacent to or within content that violates our Community Standards, as such content is prohibited from all of our platforms.
In this course you’ll learn what we’re doing to help keep people and advertisers safe.
The Facebook community: Safe, supportive, inclusive
Our mission is to give people the power to build community and bring the world closer together. Our commitment to giving people a voice remains paramount, and for people to express themselves, they need to know they are safe. That's why we have rules to determine what is and isn't allowed. As a general principle, we agree that while people should be able to say things others may not like, they should not be able to say things that may put others in danger. It's our responsibility to help ensure that communities are safe, civically engaged, supportive, informed and inclusive.
We want brands and people to feel safe
We developed our Community Standards to help keep people safe across the Facebook family of apps and services. Our goal is to allow everyone the freedom to share while ensuring that we remove content that’s harmful or illegal.
The best way to contribute to brand safety is to prevent harmful content from ever appearing on our services in the first place. We’ll never be perfect, but we continue to make investments in technology and people to limit as much harmful content as we can.
Facebook Community Standards and Instagram Community Guidelines outline what is and isn't allowed on our platforms. They cover a wide range of topics, including hate speech, nudity, child exploitation and violent/graphic content. In addition, our ad policies describe the kinds of ads that are and are not allowed on our platforms. Our Community Standards, commerce and ad policies apply to the entire global community using Facebook platforms, so whether you're an individual or a business, you're held accountable for the content you post. These policies guide how we review and remove inappropriate content that we or members of our community discover. We're also working to make it easier for people to report content that may violate our standards so that we can review it and quickly remove it if it does.
A multi-faceted approach to brand safety
Our approach includes three key areas:
- Create a safe and welcoming community.
- Maintain a high-quality ecosystem of content, publishers and ads.
- Collaborate proactively with industry partners.
At Facebook, we're committed to improving our processes and tools to help keep our community safe. To that end, we're investing in people, tools and processes, and policy.
How we identify and remove violating content
To track our progress and demonstrate our continued commitment to making Facebook and Instagram safe and inclusive, we released our first Community Standards Enforcement Report (CSER) in May 2018.
The report covers the following violation categories, among others:
- Adult nudity and sexual activity
- Hate speech
- Terrorist propaganda
- Fake accounts
- Violence and graphic content
- Bullying and harassment
- Child nudity and sexual exploitation of children
- Illicit sales of regulated goods (such as firearms and drugs)
To learn more about how we identify and remove content that violates our standards and policies, please see: Community Standards Enforcement Report.
We work with external experts who evaluate our processes and data methodologies to help us increase transparency and improve accountability. For example, the Data Transparency Advisory Group, an independent body made up of international experts in measurement, statistics, criminology and governance, collaborates with us on the CSER.
This is the first of two lessons in the course: Brand Safety Across the Facebook Family of Apps and Services. Please continue with the next lesson, Advertiser Controls.