Sensitive Events & Crisis Misinformation Policy

This Policy describes how Vexor responds to wars, natural disasters, terrorist attacks, public-health emergencies, and other sensitive events. It explains how we limit harmful misinformation, protect affected communities, and enforce higher standards of responsibility during periods of heightened risk.

1. Definition of Sensitive Events

“Sensitive events” are situations that involve real-world harm, humanitarian impact, or elevated public risk. These events require special handling and stricter enforcement standards due to their potential to cause panic, real-world harm, or secondary victimization. Examples include:

  • Armed conflicts, wars, military operations, and geopolitical crises
  • Terrorist attacks, mass shootings, and other large-scale violence
  • Natural disasters (earthquakes, floods, wildfires, hurricanes, storms)
  • Humanitarian emergencies (large-scale displacement, famine, refugee crises)
  • Pandemics, epidemics, and serious public-health emergencies
  • Industrial accidents, infrastructure failures, and major transportation disasters

During such events, Vexor may temporarily adjust recommendation systems, monetization rules, and content moderation thresholds to prioritize safety and accuracy.
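The temporary adjustments described above can be pictured as a configuration toggle. The sketch below is purely illustrative; the setting names, values, and the `moderation_config` helper are assumptions for this example, not Vexor's real systems.

```python
# Illustrative crisis-mode configuration toggle. All keys and values are
# assumptions for this sketch, not Vexor's actual settings.
DEFAULT_MODE = {
    "removal_threshold": 0.9,      # classifier confidence needed to remove
    "ads_on_crisis_content": True,  # ads may run next to event coverage
    "trending_eligible": True,      # event content can appear in trends
}

CRISIS_MODE = {
    "removal_threshold": 0.7,       # stricter moderation threshold
    "ads_on_crisis_content": False, # monetization paused near crisis content
    "trending_eligible": False,     # unverified event content kept off trends
}

def moderation_config(active_sensitive_event: bool) -> dict:
    """Return the moderation settings for the current platform state."""
    return CRISIS_MODE if active_sensitive_event else DEFAULT_MODE
```

The point of the sketch is that crisis handling is a reversible mode switch rather than a permanent policy change: when the event ends, defaults are restored.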

2. Prohibited Content in Sensitive Events

Vexor may remove, restrict, or down-rank content related to sensitive events when it:

  • Denies, trivializes, or glorifies real-world tragedies, including content that mocks victims, dismisses suffering, or celebrates violence.
  • Spreads dangerous medical or emergency misinformation, such as false claims about cures, evacuation instructions, or emergency response procedures.
  • Targets victims, affected communities, or protected groups with harassment, hate speech, or dehumanizing narratives linked to the crisis.
  • Encourages violence, looting, or opportunistic exploitation in the context of disasters or armed conflicts.
  • Fabricates crisis footage or uses misleading edits (e.g., old videos presented as current events, AI-manipulated content presented as real without disclosure) that could cause panic or materially mislead viewers.
  • Promotes scams or fraudulent schemes that exploit donations, relief campaigns, or humanitarian aid for personal gain.

Severe violations may result in immediate account suspension or permanent termination, especially where there is evidence of organized manipulation or coordinated harm.

3. Misinformation Controls in Crises

During sensitive events, Vexor activates additional safeguards to reduce the spread of harmful or misleading information. Measures may include:

  • Down-ranking of unverified or misleading crisis content in recommendations, trending surfaces, and search results, especially where claims contradict trusted public information.
  • Partnerships with fact-checkers, NGOs, and public information authorities to identify high-risk narratives, clarify false claims, and apply corrective labels.
  • Context labels and information panels linking to authoritative health, safety, or election resources (e.g., WHO, public health agencies, civil protection authorities).
  • Reduced amplification of borderline or sensational content that could inflame tensions, incite hatred, or exploit suffering, even if it does not meet the threshold for removal.
  • Temporary restrictions on certain hashtags, search terms, or trends that are heavily abused for misinformation or targeted harassment during a crisis.
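Taken together, the measures above amount to adjusting a recommendation score based on crisis signals. The following is a minimal sketch under stated assumptions: the signal names, multipliers, and the `adjust_rank_score` function are invented for illustration and do not describe Vexor's actual ranking system.

```python
# Hypothetical sketch of crisis down-ranking. Signal names and weights are
# illustrative assumptions, not Vexor's production ranking logic.
from dataclasses import dataclass

@dataclass
class CrisisSignals:
    verified_source: bool          # creator is a trusted news/authority account
    contradicts_authorities: bool  # claim conflicts with trusted public information
    sensational: bool              # flagged as borderline/sensational content
    restricted_hashtag: bool       # uses a temporarily restricted hashtag or term

def adjust_rank_score(base_score: float, s: CrisisSignals) -> float:
    """Apply crisis-mode multipliers to a recommendation score."""
    if s.verified_source and not s.contradicts_authorities:
        return base_score  # trusted sources keep their normal ranking
    score = base_score
    if s.contradicts_authorities:
        score *= 0.2   # strong down-rank for contradicted claims
    if s.sensational:
        score *= 0.5   # reduced amplification of borderline content
    if s.restricted_hashtag:
        score *= 0.1   # heavily abused hashtags are nearly suppressed
    return score
```

Note that the multipliers compose: content that is both sensational and uses a restricted hashtag is suppressed far more strongly than content with a single signal, mirroring how layered safeguards reinforce one another without requiring outright removal.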

4. Monetization & Sensitive Event Content

Vexor may limit or disable monetization for content related to ongoing crises, particularly when commercial gain risks undermining public trust or exploiting human suffering. Examples include:

  • Demonetization of videos that depict or discuss active wars, attacks, or disasters in a sensational or clickbait style.
  • Restrictions on ads appearing next to crisis-related content to protect brand and user trust.
  • Additional review for charity, fundraising, or donation campaigns claiming to support victims or relief efforts.

Legitimate educational, news, or awareness content may remain eligible for monetization where it meets our Creator Monetization Policy and does not engage in exploitation, misinformation, or harm.

5. User Tools for Crisis Misinformation & Harm

Users play an important role in identifying harmful or misleading content during crises. Vexor provides:

  • Report Tools: Users can report crisis-related misinformation, hate, threats, or scams directly via the in-app “Report” features on videos, profiles, comments, and messages.
  • Mute & Block Features: Users can mute or block accounts spreading harmful narratives, harassment, or distressing crisis content.
  • Safety & Help Center Resources: Guidance on recognizing and reporting crisis misinformation, scams, and harmful propaganda.

Reports related to sensitive events may be prioritized in moderation queues depending on the potential for real-world harm.
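The prioritization described above behaves like a severity-ordered queue. As a sketch only (the report categories, severity weights, and `ReportQueue` class are assumptions, not Vexor's production triage system), it could look like this:

```python
# Hypothetical sketch of crisis-report triage using a priority queue.
# Categories, severity levels, and the crisis bonus are illustrative
# assumptions, not Vexor's actual moderation pipeline.
import heapq
import itertools

SEVERITY = {"scam": 2, "misinformation": 3, "harassment": 3, "threat": 5}

class ReportQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order

    def submit(self, report_id: str, category: str, crisis_related: bool):
        severity = SEVERITY.get(category, 1)
        if crisis_related:
            severity += 2  # sensitive-event reports move up the queue
        # heapq is a min-heap, so negate severity for highest-first ordering
        heapq.heappush(self._heap, (-severity, next(self._counter), report_id))

    def next_report(self) -> str:
        """Return the ID of the highest-priority pending report."""
        return heapq.heappop(self._heap)[2]
```

The design choice worth noting is that crisis relevance adds to, rather than replaces, the category severity: a crisis-related scam report still ranks below a crisis-related threat, so the most dangerous reports reach moderators first.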

6. Enforcement & Escalation

When crisis-related violations are identified, Vexor may:

  • Remove or age-restrict the violating content
  • Apply warning labels or context panels
  • Issue warnings, temporary suspensions, or strike-based penalties
  • Disable monetization or promotional eligibility
  • Permanently terminate accounts for severe or repeat abuses
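The escalation ladder above can be sketched as a mapping from a creator's strike count to the next enforcement step. This is a simplified illustration; the thresholds, action names, and the `enforcement_action` function are assumptions for the example, not Vexor's actual strike policy.

```python
# Hypothetical sketch of a strike-based enforcement ladder. Thresholds and
# action names are illustrative assumptions, not Vexor's real policy.
def enforcement_action(strikes: int, severe: bool = False) -> str:
    """Map a creator's strike count to the next enforcement step."""
    if severe:
        return "permanent_termination"  # severe abuse skips the ladder
    if strikes <= 1:
        return "warning"
    if strikes == 2:
        return "temporary_suspension"
    if strikes == 3:
        return "monetization_disabled"
    return "permanent_termination"
```

The `severe` flag models the policy statement that severe or coordinated violations can bypass graduated penalties and lead directly to termination.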

Where threats, terrorism, or serious crimes are involved, Vexor may escalate matters to law enforcement in accordance with our Law Enforcement Request Guide and applicable laws.

7. Collaboration with Authorities & NGOs

Vexor may cooperate with:

  • Public health agencies during epidemics or pandemics
  • Emergency management agencies during disasters
  • Trusted fact-checking organizations and civil society groups
  • International crisis response organizations for information verification

These collaborations help ensure that high-impact misinformation is quickly identified and mitigated.

8. Contact

For crisis-related policy questions, high-risk misinformation, or coordination:

Crisis & Events Team: crisis@vexor.to
Safety & Policy: safety@vexor.to
General Support: support@vexor.to
