AI Disclosure & EU AI Act Compliance

This policy explains how Vexor, operated by Nexa Group, uses artificial intelligence systems, how automated decisions are made, and the rights users have under the EU Artificial Intelligence Act, GDPR, and global transparency standards.

1. Overview of AI Use at Vexor

Artificial intelligence is a core component of Vexor’s safety, personalization, and scalability. Nexa Group uses AI systems for:

  • Content analysis and safety detection
  • Ranking and “For You” recommendation feeds
  • Spam filtering and bot detection
  • Age estimation signals and safety warnings
  • Fraud and risk mitigation
  • Operational efficiency and performance optimization

While AI assists in moderation and detection, final high-impact decisions always involve a trained human reviewer.

2. Categories of AI Systems Used

Vexor uses multiple AI categories with different risk levels under the EU AI Act:

  • Low-risk AI systems: Recommendation ranking, feed personalization, spam reduction, comment prioritization, discovery algorithms.
  • Moderate-risk AI systems: Automated nudity detection, violent content detection, hate speech identification, bot-pattern analysis, synthetic media flagging.
  • High-risk AI (flagging only, not enforcement): Child sexual content detection, extremism signals, grooming-pattern analysis, self-harm and suicide indicators, terrorism-related content detection.
    These systems only flag content. Human moderators make final decisions.

3. No Fully Automated Enforcement Decisions

Under Nexa Group policy and in alignment with the EU AI Act, Vexor does not use fully automated systems to take high-impact enforcement actions. This includes:

  • Permanent account bans
  • Long-term suspensions
  • Content removal for serious violations
  • Creator monetization disabling
  • Age verification decisions

A human reviewer always confirms the final decision.

4. User Right to Human Review

In accordance with GDPR Articles 21 & 22 and EU AI Act transparency requirements, all Vexor users have the right to request a human review of:

  • Content removals based on automated detection
  • Shadow-limits or reduced visibility signals
  • Account warnings or strikes
  • Suspensions and monetization restrictions
  • Age estimation or age verification outcomes

To request a human review, submit an appeal by email to appeals@vexor.to.

5. AI Transparency Obligations

Nexa Group commits to transparent, responsible communication about AI use. Vexor will:

  • Label system messages influenced by automated tools
  • Notify users when AI contributes to moderation signals
  • Disclose which features use algorithmic personalization
  • Provide clear explanations for high-impact enforcement actions
  • Publish AI-related safety and accuracy updates

Vexor does not use AI for behavioral manipulation, non-consensual profiling, or discriminatory targeting.

6. Data Protection, Fairness & GDPR Alignment

All AI models used by Nexa Group comply with GDPR principles:

  • Lawfulness, fairness, transparency in automated processing
  • Purpose limitation for safety and platform functionality
  • Data minimization to avoid unnecessary personal data collection
  • Accuracy through continuous model evaluation
  • Security using encrypted storage and internal access controls
  • Non-discrimination to ensure no bias against protected groups

7. Safety Testing, Evaluation & Bias Prevention

Nexa Group evaluates all AI models, both internally and through third-party audits, to ensure:

  • Low false-positive and false-negative rates
  • Resilience against evasion methods
  • No disproportionate impact on protected characteristics
  • Improved detection accuracy and fairness over time
  • Compliance with EU AI Act risk classification and testing requirements

Models involved in child safety, extremism, and public harm undergo the strictest reviews.

8. User Disclosure for AI-Generated or Manipulated Media

Users must label:

  • AI-generated videos
  • Deepfakes or synthetic human likeness
  • Digitally altered voices or appearances

Failure to label manipulated media may result in visibility limits or enforcement.

9. Contact Information

For questions about AI, transparency, or automated systems:

AI Transparency Office: ai-transparency@vexor.to
Data Protection Officer (DPO): dpo@vexor.to
Nexa Group Legal: legal@nexa-group.org
