Crisis Safety & Self-Harm Prevention Policy
Vexor, operated by Nexa Group, is committed to protecting users from self-harm, suicide risk, and dangerous behavior. This policy explains how we detect, review, and respond to crisis-related content and how we support users who may be at risk.
1. Purpose & Scope
This Crisis Safety & Self-Harm Prevention Policy applies to all content, users, and activities on Vexor. It covers:
- Content depicting or discussing self-harm or suicidal ideation.
- Dangerous challenges or behaviors that may cause serious injury.
- Support resources and interventions for users in distress.
- Our obligations to protect minors and vulnerable individuals.
Our objective is to balance open discussion of mental health with strong safeguards against content that could directly or indirectly encourage harm.
2. Prohibited Content
Vexor strictly prohibits content that could reasonably be interpreted as encouraging, facilitating, or glorifying self-harm or suicide, including:
- Instructions or step-by-step descriptions related to self-harm or suicide.
- Content that praises, romanticizes, or normalizes self-harm or suicidal behavior.
- Graphic or shocking imagery showing self-injury, severe wounds, or attempts.
- Content that encourages others to harm themselves or participate in dangerous acts.
- Dangerous “challenges” or trends that risk severe physical or psychological harm.
- Content depicting minors engaged in harmful or self-destructive behavior.
Such content may be removed immediately, and the associated account may be restricted or permanently banned depending on severity and context.
3. Allowed but Sensitive Content
Vexor recognizes the importance of open, honest conversations about mental health. Certain content is allowed but treated as sensitive and may be limited, age-gated, or labeled:
- Non-graphic mental health discussions: Users describing their feelings, struggles, or experiences in a non-instructional, non-romanticized manner.
- Survivor stories: Personal stories of recovery, coping, or resilience that do not provide explicit harmful details.
- Educational or awareness material: Content from professionals, organizations, or creators raising awareness about mental health, suicide prevention, and coping strategies.
In some cases, Vexor may limit recommendations, add warnings, or redirect users to mental health resources when sensitive topics are discussed.
4. AI Detection & Human Review
To identify potential crisis situations as early as possible, Vexor uses a combination of automated systems and human moderators:
- Automated Detection: AI models may scan for:
  - Textual indicators (keywords, phrases, captions, comments).
  - Visual indicators (imagery associated with self-harm or dangerous acts).
  - Behavioral patterns (sudden shifts in content tone, repeated distress signals).
- Human Moderation: Content flagged as high risk is escalated to trained moderation teams who review context, user history, and potential harm.
Automated systems do not make final decisions on complex or high-risk cases without human review, especially where crisis or self-harm is involved.
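For illustration only, the triage flow described above can be pictured as a score-and-route step in which automated signals never produce a final decision on high-risk material. The sketch below is a simplified, hypothetical model: the signal names, thresholds, and routing categories are assumptions made for this example, not Vexor's actual detection system.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    NO_ACTION = "no_action"
    LIMIT_AND_LABEL = "limit_and_label"   # sensitive but allowed content
    HUMAN_REVIEW = "human_review"         # escalate to trained moderators

@dataclass
class Signals:
    # Hypothetical automated signals; real systems use model outputs, not fixed fields.
    text_risk: float        # textual indicators (keywords, phrases, captions, comments)
    visual_risk: float      # visual indicators (self-harm imagery, dangerous acts)
    behavior_risk: float    # behavioral patterns (tone shifts, repeated distress signals)

def triage(signals: Signals) -> Route:
    """Route content based on combined automated signals.

    Automated checks may limit or label lower-risk material, but anything
    that looks like crisis or self-harm content is routed to human review
    rather than actioned automatically.
    """
    score = max(signals.text_risk, signals.visual_risk, signals.behavior_risk)
    if score >= 0.7:        # illustrative threshold, not a real production value
        return Route.HUMAN_REVIEW
    if score >= 0.4:
        return Route.LIMIT_AND_LABEL
    return Route.NO_ACTION

# Example: strong textual indicators push the item to human review.
print(triage(Signals(text_risk=0.85, visual_risk=0.1, behavior_risk=0.3)))
```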
5. Crisis Intervention & Platform Response
When Vexor identifies content or behavior suggesting imminent self-harm or severe psychological distress, we may take one or more of the following steps:
- Immediate Content Action: Remove or restrict access to highly harmful content to prevent further exposure or imitation.
- Account Wellness Check: Send in-app notifications or messages encouraging the user to seek support, consider pausing content creation, or use wellness tools.
- Crisis Resources: Display specialized safety panels that include mental health hotlines, chat services, and local support organizations.
- Emergency Escalation: In cases suggesting imminent life-threatening danger, we may escalate to relevant authorities or crisis partners, where legally permissible and operationally feasible.
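As a rough illustration of how these steps might be combined, the hypothetical sketch below maps an assessed severity level to a set of the interventions listed above. The level names and the mapping itself are assumptions for illustration, not the platform's actual decision matrix.

```python
from enum import Enum

class Severity(Enum):
    SENSITIVE = 1   # allowed but sensitive content
    HIGH_RISK = 2   # content suggesting serious distress
    IMMINENT = 3    # signs of imminent life-threatening danger

# Hypothetical mapping of severity to the interventions described above.
RESPONSES = {
    Severity.SENSITIVE: ["crisis_resources"],
    Severity.HIGH_RISK: ["content_action", "wellness_check", "crisis_resources"],
    Severity.IMMINENT:  ["content_action", "wellness_check", "crisis_resources",
                         "emergency_escalation"],
}

def respond(level: Severity) -> list[str]:
    """Return the set of interventions applied at an assessed severity level."""
    return RESPONSES[level]

print(respond(Severity.HIGH_RISK))
```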
6. Support Resources & Localization
Vexor strives to provide localized mental health resources where possible. We may display:
- National or regional suicide prevention hotlines.
- Links to mental health organizations and professional associations.
- Guides on coping strategies and seeking professional help.
- Emergency contact reminders (e.g., encouraging users to contact local emergency services).
Resource availability may differ depending on user location and local infrastructure.
7. User Reporting of Self-Harm Concerns
Any user can report content or behavior that appears to involve self-harm or suicidal risk. Reporting options include:
- In-app report tools on videos, comments, or profiles.
- Dedicated safety contact channels such as safety@vexor.to.
Reports flagged as self-harm or crisis-related are prioritized for expedited review.
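The expedited handling of self-harm reports can be pictured as a priority queue in which crisis-tagged reports are always dequeued ahead of routine ones. The sketch below is a simplified assumption about how such prioritization could work; the category names and priority values are illustrative, not a description of Vexor's actual queuing system.

```python
import heapq
import itertools

# Hypothetical priority map: lower number = reviewed sooner.
PRIORITY = {"self_harm": 0, "harassment": 1, "spam": 2}

_counter = itertools.count()  # tie-breaker preserves arrival order within a priority

def enqueue(queue: list, category: str, content_id: str) -> None:
    """Add a report; self-harm / crisis reports sort ahead of everything else."""
    heapq.heappush(queue, (PRIORITY.get(category, 99), next(_counter), content_id))

queue: list = []
enqueue(queue, "spam", "video_123")
enqueue(queue, "self_harm", "comment_456")   # reviewed first despite arriving later
enqueue(queue, "harassment", "profile_789")

while queue:
    _, _, content_id = heapq.heappop(queue)
    print("review", content_id)
```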
8. Minors & Vulnerable Users
Extra safeguards apply when minors are involved:
- Stricter thresholds for content removal and account intervention.
- Faster escalation to safety teams.
- Higher sensitivity to signs of bullying, harassment, or grooming that may contribute to self-harm risk.
- Cooperation with child protection organizations and law enforcement where required by law.
9. Privacy, Data Handling & Crisis Response
Vexor handles crisis-related data in line with our Privacy Policy, Data Retention & Deletion Policy, and applicable laws:
- Only authorized safety, security, or legal personnel may access crisis-related records.
- Data may be preserved when required to protect user safety or respond to legal requests.
- Any sharing of information with external crisis partners or law enforcement is done in strict compliance with applicable privacy and safety regulations.
10. Creator Responsibilities
Creators discussing mental health topics must:
- Avoid sharing explicit or instructional details about self-harm.
- Avoid glamorizing, romanticizing, or normalizing self-harm or suicide.
- Use supportive, recovery-focused, and non-triggering language.
- Comply with local regulations, medical advertising rules, and professional ethics if presenting as a health professional.
11. Education, Prompts & In-App Safety Features
To prevent harm before it occurs, Vexor may use:
- Pre-publish prompts warning users when captions or content appear high-risk.
- Labels or interstitials on sensitive mental health content.
- Links to crisis support resources on relevant videos.
- Educational content in Safety Centers and help articles.
12. Law Enforcement & Crisis Partner Cooperation
In exceptional circumstances involving imminent danger, Vexor may cooperate with:
- Local law enforcement agencies.
- National crisis centers or hotlines (where partnerships exist).
- Child protection units for incidents involving minors.
This is conducted under strict legal, privacy, and proportionality standards.
13. Appeals & Content Restoration
If content is removed or restricted under this policy and a user believes it was a mistake:
- They may submit an appeal through the in-app appeal function or by contacting appeals@vexor.to.
- Appeals are reviewed by a separate moderation or safety team.
- If the decision is reversed, content may be restored where appropriate and lawful.
14. Emergency Guidance
If you or someone you know is in immediate danger or at risk of self-harm or suicide, please contact local emergency services right away. Vexor is not a crisis hotline and cannot replace professional medical or psychological care.
15. Contact Information
For crisis safety questions, reporting, or escalation:
Safety Team: safety@vexor.to
Emergency Channel (High-Risk Reports): emergency@vexor.to
General Support: support@vexor.to
16. Updates to This Policy
Vexor may update this Crisis Safety & Self-Harm Prevention Policy periodically to reflect best practices, regulatory changes, partner guidance, and operational learnings. Continued use of Vexor after any update constitutes acceptance of the revised policy.