Friendium Anti-Harassment Filters Policy

This Anti-Harassment Filters Policy explains how Friendium, a real-identity social network operated by Nexa-Group, deploys automated and user-controlled filtering systems to prevent harassment, abuse, intimidation, and unwanted interactions while preserving legitimate speech.

1. Purpose & Safety Objectives

Friendium is designed to foster respectful, authentic interaction between real people. Anti-harassment filters exist to proactively reduce harm, minimize exposure to abusive content, and empower users to control their experience without requiring constant manual moderation.

These systems are a core component of Friendium’s safety architecture and complement the Community Standards, reporting mechanisms, and human review.

2. Scope of Filters

Anti-harassment filters apply across multiple interaction surfaces, including:

  • Comments and replies
  • Direct messages (DMs)
  • Mentions and tags
  • Profile posts and wall interactions
  • Group discussions and event comments

3. Categories of Filtered Content

Filters may automatically detect, flag, limit, or block content associated with:

  • Harassment, bullying, or intimidation
  • Hate speech or discriminatory language
  • Threats of violence or harm
  • Sexual harassment or degrading remarks
  • Repeated unwanted contact
  • Obscene or aggressive language patterns

4. Keyword & Phrase Filtering

Friendium uses keyword-based filtering to identify commonly abusive or harmful terms. Users may also:

  • Add custom keywords to personal filter lists
  • Block phrases associated with harassment
  • Filter content in multiple languages
  • Apply filters selectively to comments or messages
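As a rough illustration only, a per-user keyword filter with custom lists and selective scopes might look like the following sketch. The class and method names (`FilterList`, `blocks`) are hypothetical; Friendium's actual implementation is not public.

```python
import re

class FilterList:
    """Hypothetical per-user keyword filter with selectable scopes."""

    def __init__(self, keywords, scopes=("comments", "messages")):
        # Surfaces the user chose to apply this list to (see Section 4).
        self.scopes = set(scopes)
        # Word-boundary matching avoids flagging benign words that merely
        # contain a filtered term as a substring.
        self.pattern = re.compile(
            r"\b(" + "|".join(re.escape(k) for k in keywords) + r")\b",
            re.IGNORECASE,
        )

    def blocks(self, text, scope):
        """Return True if text in the given scope matches a filtered keyword."""
        return scope in self.scopes and bool(self.pattern.search(text))

filters = FilterList(["insult", "threat"], scopes=("comments",))
print(filters.blocks("That was an INSULT.", "comments"))  # True
print(filters.blocks("That was an insult.", "messages"))  # False: scope off
```

Matching case-insensitively on word boundaries is one common design choice; production systems typically add normalization for misspellings and multilingual variants.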

5. Behavioral Pattern Detection

In addition to keywords, Friendium evaluates behavioral signals, including:

  • High-frequency messaging to a single user
  • Repeated negative interactions across posts
  • Coordinated harassment patterns
  • Escalating tone or language severity

These systems are designed to identify harassment even when explicit slurs are absent.
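One of the signals above, high-frequency messaging to a single user, can be sketched as a sliding-window rate check. The thresholds below are invented for illustration; as Section 9 notes, real detection thresholds are deliberately not disclosed.

```python
from collections import deque

class FrequencyDetector:
    """Illustrative sliding-window check for repeated contact with one user."""

    def __init__(self, max_messages=5, window_seconds=60):
        self.max_messages = max_messages
        self.window = window_seconds
        self.timestamps = {}  # (sender, recipient) -> deque of send times

    def record(self, sender, recipient, now):
        """Record a send; return True if the sender exceeds the rate limit."""
        q = self.timestamps.setdefault((sender, recipient), deque())
        q.append(now)
        # Discard sends that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_messages

d = FrequencyDetector(max_messages=3, window_seconds=60)
flags = [d.record("a", "b", t) for t in (0, 5, 10, 15)]
print(flags)  # [False, False, False, True]
```

Keying on the (sender, recipient) pair is what makes this a *targeted-contact* signal rather than a general rate limit: the same sender messaging many different users would not trip it.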

6. Automated Action Types

Depending on severity and context, filters may:

  • Hide content from the recipient
  • Require manual approval before display
  • Limit visibility to the sender only
  • Trigger warning screens
  • Escalate content for human review
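The graduated actions above imply a mapping from detected severity to response. A minimal sketch of such a mapping follows; the score bands are assumptions for illustration, not Friendium's real thresholds.

```python
def choose_action(severity: float) -> str:
    """Map an assumed severity score in [0, 1] to a graduated action."""
    if severity >= 0.9:
        return "escalate_for_human_review"   # highest-risk content
    if severity >= 0.7:
        return "hide_from_recipient"
    if severity >= 0.5:
        return "require_manual_approval"
    if severity >= 0.3:
        return "show_warning_screen"
    return "allow"

print(choose_action(0.95))  # escalate_for_human_review
print(choose_action(0.40))  # show_warning_screen
```

A tiered design like this lets borderline content degrade gracefully (warnings, approval queues) instead of being removed outright, which supports the good-faith-expression balance described in Section 13.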

7. User-Controlled Filter Settings

Users can customize their anti-harassment protections by:

  • Enabling strict, standard, or relaxed filtering modes
  • Filtering content from non-connections
  • Restricting messages from newly created accounts
  • Blocking repeated commenters automatically
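One possible shape for these user-controlled settings is a simple configuration record. The field names below are hypothetical, chosen only to mirror the options listed above.

```python
from dataclasses import dataclass

@dataclass
class FilterSettings:
    """Hypothetical per-user anti-harassment preferences."""
    mode: str = "standard"                    # "strict" | "standard" | "relaxed"
    filter_non_connections: bool = False      # filter content from non-connections
    restrict_new_accounts: bool = False       # limit messages from new accounts
    auto_block_repeat_commenters: bool = False

strict = FilterSettings(mode="strict",
                        filter_non_connections=True,
                        restrict_new_accounts=True)
print(strict.mode)  # strict
```

Defaulting to "standard" with opt-in tightening reflects the policy's framing: protections users can strengthen without any setting being mandatory.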

8. Protection for Vulnerable Users

Additional safeguards may be automatically enabled for:

  • Minors and young adults
  • Users experiencing active harassment campaigns
  • Users targeted due to protected characteristics
  • Users involved in sensitive or high-risk discussions

9. Transparency & Explainability

When content is filtered or limited, Friendium may provide:

  • Notifications explaining the restriction
  • General reasoning categories (e.g., harassment detection)
  • Links to relevant policies

To prevent abuse of the system, exact detection thresholds are not disclosed.

10. False Positives & User Appeals

Friendium acknowledges that automated systems are not perfect. Users may:

  • Appeal filtered or restricted content
  • Request human review
  • Provide contextual explanations

Successful appeals may improve future filter accuracy.

11. Interaction With Other Safety Systems

Anti-harassment filters work alongside:

  • Blocking and muting tools
  • Restricted lists
  • Reporting & escalation workflows
  • Repeat offender tracking

12. Enforcement Escalation

Repeated triggering of filters may lead to:

  • Temporary messaging or commenting limits
  • Account feature restrictions
  • Mandatory safety acknowledgments
  • Suspension or permanent removal

13. Good-Faith Expression

Filters are not intended to suppress respectful disagreement, satire, or critical discussion. Friendium balances safety with freedom of expression by evaluating intent, context, and harm.

14. Legal & Regulatory Alignment

Filtering systems are designed to support compliance with global online safety, anti-harassment, and child-protection laws, while respecting lawful speech rights.

15. Continuous Improvement

Friendium regularly updates filter models based on:

  • User feedback
  • Moderator review outcomes
  • Emerging abuse patterns
  • Regulatory guidance

16. Policy Updates

This policy may be revised as new risks, technologies, or legal obligations arise. Continued use of Friendium constitutes acceptance of updated filtering practices.

17. Contact & Support

Safety & Harassment Reports: safety@friendium.com
Policy Questions: trust@friendium.com
User Support: support@friendium.com
