Content Moderation

Maintaining high-quality, appropriate content across the platform.

Moderation Policies

Prohibited Content

  • Hate speech

  • Harassment

  • Explicit content

  • Misinformation

  • Spam

  • Copyright violations

Restricted Content

  • Political content

  • Medical claims

  • Financial advice

  • Legal advice


Moderation Process

1. Automated Screening

All content passes through automated filters:

  • Profanity detection

  • Spam identification

  • Pattern matching

  • ML-based classification
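
The four filters above can be sketched as a single screening function. The word list, spam patterns, threshold, and `toxicity_score` stub below are hypothetical placeholders standing in for production components, not the platform's actual filters:

```python
import re

# Stand-in word list and patterns; real deployments use maintained
# lists and trained models.
PROFANITY = {"darn", "heck"}
SPAM_PATTERNS = [
    re.compile(r"(?i)\bbuy now\b"),
    re.compile(r"(?i)\bfree money\b"),
]

def toxicity_score(text: str) -> float:
    """Stand-in for an ML classifier returning a 0..1 toxicity score."""
    return 0.0

def screen(text: str) -> list[str]:
    """Return the list of filters that flagged the text (empty if clean)."""
    flags = []
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & PROFANITY:                            # profanity detection
        flags.append("profanity")
    if any(p.search(text) for p in SPAM_PATTERNS):   # spam identification
        flags.append("spam")
    if toxicity_score(text) > 0.8:                   # ML-based classification
        flags.append("toxicity")
    return flags

print(screen("Free money! Buy now!"))  # → ['spam']
```

Content with an empty flag list proceeds to publication; anything flagged is routed to human review.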

2. Human Review

Flagged content is reviewed by moderators:

  • Context evaluation

  • Policy application

  • Decision making

  • User notification
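
A moderator's decision could be captured in a record like the following. The `ReviewDecision` fields and `notify_user` helper are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewDecision:
    content_id: str
    policy: str       # policy applied, e.g. "spam" or "harassment"
    action: str       # "approved", "removed", or "restricted"
    rationale: str    # notes from the context evaluation
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def notify_user(decision: ReviewDecision) -> str:
    """Render the user-facing notification for a decision."""
    return (f"Your content {decision.content_id} was {decision.action} "
            f"under the {decision.policy} policy.")
```

Recording the rationale alongside the action gives the appeals stage the context it needs.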

3. Appeals

Users can appeal moderation decisions:

  • Submit appeal

  • Additional review

  • Final decision

  • Transparency report
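
One way to model the review-and-appeals lifecycle is as a small state machine. The states and allowed transitions below are a hypothetical sketch of the process described above, not the platform's actual data model:

```python
from enum import Enum

class Status(Enum):
    FLAGGED = "flagged"        # caught by automated screening
    REMOVED = "removed"        # human review upheld the flag
    APPEALED = "appealed"      # user submitted an appeal
    UPHELD = "upheld"          # additional review confirmed removal
    REINSTATED = "reinstated"  # additional review reversed removal

# Allowed transitions; UPHELD and REINSTATED are final decisions,
# after which the outcome feeds the transparency report.
TRANSITIONS = {
    Status.FLAGGED: {Status.REMOVED},
    Status.REMOVED: {Status.APPEALED},
    Status.APPEALED: {Status.UPHELD, Status.REINSTATED},
}

def advance(current: Status, target: Status) -> Status:
    """Move to the target state, rejecting invalid transitions."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot go from {current.name} to {target.name}")
    return target
```

Making final states terminal (no outgoing transitions) keeps decisions from being silently reversed outside the appeals process.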


AI-Exclusive Content

Special moderation for AI-exclusive content:

  • Insider Tips: Must be genuine, helpful insights

  • Local Secrets: Must be accurate, non-promotional

  • Behind the Scenes: Must be authentic, interesting


Quality Standards

Content Quality

  • Accurate information

  • Clear writing

  • Helpful details

  • Professional tone

Photo Quality

  • High resolution

  • Good lighting

  • Relevant content

  • No watermarks
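
The photo standards could be enforced with a simple metadata check. The thresholds and field names below are hypothetical assumptions, not the platform's actual requirements:

```python
MIN_WIDTH, MIN_HEIGHT = 1024, 768  # assumed resolution floor

def check_photo(meta: dict) -> list[str]:
    """Return a list of quality issues for a photo's metadata."""
    issues = []
    if meta["width"] < MIN_WIDTH or meta["height"] < MIN_HEIGHT:
        issues.append("low resolution")
    if meta.get("brightness", 1.0) < 0.25:  # crude proxy for poor lighting
        issues.append("poor lighting")
    if meta.get("has_watermark"):
        issues.append("watermark")
    return issues
```

Relevance is harder to check mechanically, so it is typically left to human review.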

