User Safety Center

The Vexor Safety Center outlines our platform-wide safety standards, reporting mechanisms, and enforcement procedures, and our commitment to community protection. Our systems are designed to protect creators, viewers, and communities through proactive technology, clear rules, and rapid incident response.

1. Our Commitment to Safety

Vexor is dedicated to maintaining a secure, respectful, and supportive environment. To achieve this, we combine:

  • Advanced machine learning for real-time risk detection
  • Specialized moderation teams operating 24/7
  • Behavioral analysis and anomaly monitoring
  • Strict enforcement of Community Guidelines
  • Clear user tools for reporting or blocking harmful behavior

Safety is embedded into our platform architecture, from algorithmic ranking systems to privacy controls and content evaluation workflows.

2. Protection of Minors

Child safety is Vexor’s highest priority. We enforce enhanced protections for minors and prohibit:

  • Any sexual content involving minors (immediate removal, permanent ban, law enforcement report)
  • Adults attempting to contact minors for harmful or inappropriate purposes
  • Videos depicting minors in dangerous, risky, or exploitative situations
  • Collection or misuse of data from children under 13

All suspicious accounts or content involving minors are escalated to trained child-safety response specialists and may be referred to authorities, including the National Center for Missing & Exploited Children (NCMEC) and local police.

3. Harassment, Bullying & Abuse

Vexor does not tolerate harassment, intimidation, or targeted attacks. We remove or restrict:

  • Threats of violence or harm
  • Insults or slurs targeting protected characteristics
  • Mass-reporting campaigns intended to silence users
  • Stalking, unwanted messaging, or repeated negative contact
  • Attempts to shame, embarrass, or humiliate other users

Users who engage in harassment may face warnings, feature restrictions, or permanent account removal depending on severity.

4. Mental Health & Self-Harm Prevention

Vexor is committed to supporting users experiencing emotional distress. Content involving self-harm or suicidal intent may be:

  • Hidden or limited to prevent harmful spread
  • Reviewed by safety professionals trained in crisis content
  • Linked to crisis support tools, hotlines, and mental health resources

We may also proactively reach out with in-app support resources where appropriate.

5. Dangerous Challenges & Harmful Behavior

Vexor prohibits content that promotes or glorifies unsafe or illegal activities. This includes:

  • Risky stunts or challenges that may result in injury
  • Drug use, distribution, or promotion of illegal substances
  • Detailed instructions for harm, violence, or weapon misuse

Such content is removed, and repeat offenders may be banned.

6. Privacy & Personal Information Safety

Users must never expose others’ private information. Vexor strictly prohibits:

  • Doxxing (addresses, phone numbers, documents, etc.)
  • Sharing private conversations without consent
  • Impersonating real people, brands, or officials
  • Pressuring users to reveal personal information

Violations may lead to account restrictions or permanent bans.

7. Reporting Tools

Users can report harmful content or behavior directly within the app. Reports may be filed for:

  • Harassment, hate speech, or threats
  • Child safety concerns or exploitation
  • Dangerous acts or self-harm
  • Misinformation or illegal activities
  • Sexual content violations

Each report is reviewed by trained moderation professionals, with child-safety cases receiving immediate priority.

8. Blocking & Restricting Users

Users can take direct action to protect themselves, including:

  • Blocking: Prevents another user from viewing or interacting with your content.
  • Restricting: Limits message visibility and comment access from specific users.
  • Privacy Controls: Options to make your account private, limit comments, disable DMs, and more.

9. Content Moderation & Enforcement

Depending on the severity of the violation, Vexor may apply:

  • Content removal or muting
  • Warning notices and educational prompts
  • Temporary feature or account restrictions
  • Algorithmic downranking (visibility reduction)
  • Permanent bans for severe or repeated violations
  • Escalation to law enforcement when legally required

10. Appeals Process

If you believe an enforcement action was incorrect, you may submit an appeal via the app or through Vexor Support. All appeals are manually reviewed by senior Safety personnel and may result in reinstatement or confirmation of the original action.

11. Safety for Creators

Creators benefit from additional protections such as:

  • Keyword filters and comment moderation tools
  • Spam detection and automated bot removal
  • Anti-harassment features and reporting shortcuts
  • Enhanced review for monetized accounts

12. Contact the Vexor Safety Team

Email: safety@vexor.to
Support: support@vexor.to
Emergency Reports: emergency@vexor.to
