PROMOTING SAFETY AND EXPRESSION

We are committed to protecting your voice and helping you connect and share safely.

When we’re open to hearing each other with tolerance and respect, we all benefit.

Social media has enabled more voices to be heard, but some people use it to do harm.

That’s why we have Community Standards that specify what’s allowed on our apps, and we remove anything that breaks these rules. For content that doesn’t violate our rules but has been rated false by independent fact-checkers, we reduce its distribution to help prevent it from going viral. We also provide context about what you see so you can make your own decisions on what to read, trust and share.


FINDING AND REMOVING VIOLATING CONTENT

We remove hate speech, harassment, threats of violence and other content that has the potential to silence others or cause harm.

Community Standards

We consult with experts around the world to review and regularly update our standards on what is and is not allowed across Meta's apps.

Investments in Safety

Since 2016, we have more than quadrupled our safety and security teams, which now number over 40,000 people.

Finding Violating Content

We detect the majority of the content we remove before anyone reports it to us.

Blocking Fake Accounts

We block millions of fake accounts from being created every day.

Giving People Control Over What They See

We give you control over your experience by allowing you to block, unfollow or hide people and posts.

Tools to Prevent Bullying

You can moderate comments on your posts, and on Instagram, we warn you if you’re about to post something that might be offensive so you can reconsider.

How We Improve

Learn how we update our Community Standards, measure results, work with others, and prioritize content for review.


REDUCING THE SPREAD OF MISINFORMATION

We work to limit the spread of misinformation and give you context to make your own decisions on what to read, trust and share.

Third-Party Fact-Checking Program

Today, we work with more than 90 partners covering over 60 languages around the world to review potentially false content.

Labeling Misinformation

We include warnings on posts that are rated false so that you can decide what to read or share.

Preventing Misinformation from Going Viral

When third-party fact-checkers label content false, we significantly reduce its distribution so fewer people see it.

Combating COVID-19 Misinformation

Since the World Health Organization declared COVID-19 a global public health emergency, we’ve been working to connect people to accurate information from health authorities and taking aggressive steps to stop misinformation and harmful content from spreading.


HOLDING OURSELVES ACCOUNTABLE

We won’t always get it right, so we invite people to appeal our content decisions. We also track and share our progress in making Meta's apps safer.

User Review and Appeal

People can report content to us or appeal certain decisions if they think we made a mistake in taking something down.

Oversight Board

This global body of experts will independently review Meta's most difficult content decisions, and its rulings will be binding.

Community Standards Enforcement Report

Every quarter, we release a report with metrics on how well we are enforcing our content policies.

IMPACT HIGHLIGHT

More than 95% of the hate speech we remove from Facebook is detected proactively, before anyone reports it to us.