Platform Transparency Report

This Transparency Report outlines how Vibble, a Nexa-Group platform, handles content moderation, law enforcement and government requests, user appeals, safety interventions, and integrity protections. It is designed to meet global transparency expectations under the EU Digital Services Act (DSA), UK Online Safety Act, and other international regulatory frameworks.

1. Introduction & Transparency Principles

Vibble is committed to operating as an accountable and transparent microblogging platform. Transparency is a core pillar of our trust & safety strategy and underpins our relationship with users, creators, advertisers, regulators, and civil society partners.

This report provides a structured summary of our enforcement actions, moderation systems, law enforcement cooperation, and user rights. In pre-launch or early-access phases, metrics may be minimal or set to zero; the frameworks, however, are fully defined and ready for scale.

2. Scope of this Transparency Report

This Transparency Report covers the following areas for the current reporting period (aggregated quarterly and annually):

  • Content moderation volumes and reasons for enforcement
  • Account-level actions (warnings, restrictions, suspensions, bans)
  • Appeal volumes and outcomes
  • Use of automated systems versus human review
  • Law enforcement and government requests
  • Measures against bots, spam, and coordinated platform manipulation
  • Risk assessment and mitigation activities for high-risk categories

3. Content Moderation Metrics

The figures below are placeholders for the pre-launch phase and will be populated with live data as Vibble scales. They are provided here to define the reporting structure that will be used for ongoing transparency.

  • Total posts created during the period: 0
  • Total posts reviewed by automated systems: 0
  • Total posts reviewed by human moderators: 0
  • Total posts removed or restricted: 0
  • Accounts warned: 0
  • Accounts temporarily suspended: 0
  • Accounts permanently banned: 0
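The placeholder structure above can be sketched as a simple data model. This is an illustrative representation only; the field names below are assumptions, not a published Vibble schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class ModerationMetrics:
    """Reporting structure for one period; all counters default to zero pre-launch."""
    posts_created: int = 0
    posts_reviewed_automated: int = 0
    posts_reviewed_human: int = 0
    posts_removed_or_restricted: int = 0
    accounts_warned: int = 0
    accounts_suspended: int = 0
    accounts_banned: int = 0

# A pre-launch reporting period: every metric starts at zero.
q1 = ModerationMetrics()
print(asdict(q1))
```

Defining the structure up front means live data can later populate the same fields without changing the report format.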

4. Enforcement by Violation Category

To comply with regulatory expectations and provide meaningful transparency, we break down content removals and account actions by policy category:

  • Harassment & targeted abuse: 0
  • Hate speech & protected-class attacks: 0
  • Child safety & CSAM-related violations: 0
  • Self-harm, suicide & dangerous challenges: 0
  • Threats, violence & extremism: 0
  • Graphic & sensitive media violations: 0
  • Misinformation (elections, health, crises): 0
  • Impersonation & account authenticity issues: 0
  • Spam, bots, and platform manipulation: 0
  • Intellectual property & copyright violations: 0
  • Other policy categories (combined): 0

5. Automated Detection vs Human Moderation

Vibble uses a layered moderation system combining automation with human oversight. We disclose how often decisions are initiated or executed by automated tools versus human moderators:

  • Actions initiated by automated systems: 0
  • Actions initiated by user reports: 0
  • Actions initiated by internal review / safety sweeps: 0
  • Actions requiring human-only review (no automation): 0
  • Automated actions later overturned by human appeal: 0

No permanent bans are issued solely on the basis of automation; final determinations for severe penalties always involve human review.
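The human-in-the-loop rule for severe penalties can be illustrated with a minimal gate. The function and action names here are hypothetical, a sketch of the stated policy rather than production logic:

```python
# Actions considered "severe" and therefore never finalized by automation alone
# (action names are illustrative, not Vibble's internal taxonomy).
SEVERE_ACTIONS = {"permanent_ban", "account_termination"}

def finalize_action(action: str, human_reviewed: bool) -> str:
    """Return the enforcement state, blocking automation-only severe penalties.

    Automation may *propose* a severe action, but it stays pending until a
    human moderator confirms the decision.
    """
    if action in SEVERE_ACTIONS and not human_reviewed:
        return "pending_human_review"
    return "finalized"

print(finalize_action("permanent_ban", human_reviewed=False))       # pending_human_review
print(finalize_action("temporary_suspension", human_reviewed=False))  # finalized
```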

6. User Reports & Response Performance

User reports are critical for surfacing harmful content that may evade proactive detection. We track volume and response times to ensure effective, timely intervention.

  • Total user reports received: 0
  • Reports related to harassment & hate: 0
  • Reports related to child safety: 0
  • Reports related to misinformation: 0
  • Reports related to spam/bots: 0

Response time targets:

  • High-priority safety (child safety, CSAM, violent threats): < 24 hours
  • Standard content violations: 24–72 hours
  • Complex or cross-jurisdictional cases: up to 7 days
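The response-time tiers above could be encoded as a simple lookup that routes each report to its maximum review window. The category names are assumptions used for illustration; only the deadlines come from the targets listed above:

```python
from datetime import timedelta

# Maximum review window per report category, mirroring the stated targets.
SLA = {
    "child_safety": timedelta(hours=24),        # high-priority safety
    "violent_threat": timedelta(hours=24),      # high-priority safety
    "standard_violation": timedelta(hours=72),  # standard content violations
    "cross_jurisdictional": timedelta(days=7),  # complex cases
}

def review_deadline(category: str) -> timedelta:
    """Return the review window for a category, defaulting to the standard tier."""
    return SLA.get(category, SLA["standard_violation"])

print(review_deadline("child_safety"))  # 1 day, 0:00:00
```

Defaulting unknown categories to the standard 72-hour tier keeps unclassified reports inside a bounded window rather than leaving them unscheduled.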

7. Appeals, Reversals & User Redress

Due process is a key component of platform fairness. Vibble provides appeal mechanisms for most enforcement actions and monitors outcomes to improve policy clarity and enforcement quality.

  • Total appeals submitted: 0
  • Appeals processed within service-level target: 0
  • Appeals resulting in full reversal: 0
  • Appeals resulting in partial modification (penalty reduced): 0
  • Appeals upheld (original decision confirmed): 0

Appeal analytics are used to refine moderator training, adjust AI thresholds, and update policy language where rules are frequently misunderstood.

8. Law Enforcement & Government Requests

Vibble cooperates with law enforcement and government authorities only where requests are lawful, properly scoped, and consistent with international human-rights principles. We track:

  • Total law enforcement data requests: 0
  • Emergency life-threat requests: 0
  • Preservation orders: 0
  • Court orders / warrants for content data: 0
  • Requests denied as invalid or overbroad: 0

Users are notified of data disclosures where possible, unless prohibited by law or where notification would create a significant risk of harm or impede an ongoing investigation.

9. Bots, Spam & Platform Manipulation

As a real-time, public conversation platform, Vibble faces elevated risk from bots, spam, and influence operations. This section covers enforcement in that domain:

  • Automated / bot accounts removed: 0
  • Networks removed for coordinated inauthentic behavior: 0
  • Spam campaigns disrupted: 0
  • Accounts penalized for fake engagement: 0

10. High-Risk Content & Special Handling

Certain areas receive additional scrutiny because of their potential to cause serious harm or regulatory concern, including:

  • Election and civic-process content
  • Public-health information during emergencies
  • Child safety and CSAM-related material
  • Extremism, terrorism, and violent organizations

For these categories, Vibble may apply extra labeling, restricted virality, elevated human review, and specialized escalation workflows.

11. Algorithmic Systems & Ranking Transparency

Vibble’s recommendation, ranking, and trends systems are documented in the Algorithmic Transparency Summary and Algorithmic Accountability Policy. This Transparency Report cross-references those documents and provides:

  • High-level description of key ranking signals
  • Disclosures of signals we do not use (e.g., race, religion, credit data)
  • Information on user controls (resetting personalization, opting out where possible)
  • Clarification of when content may be down-ranked or labeled instead of removed

12. Risk Assessments & Regulatory Alignment

Vibble conducts structured risk assessments in line with EU DSA systemic risk obligations and similar frameworks. These assessments focus on:

  • Child safety and grooming risk
  • Disinformation and civic-process risks
  • Algorithmic amplification and echo-chamber risks
  • Data protection, security, and abuse vectors

Findings feed into our mitigation roadmap, including product changes, policy updates, and enhancements to human and automated review systems.

13. Methodology, Audits & Limitations

Numbers in this report may be adjusted after internal audits, bug fixes in logging pipelines, or classification corrections. Where material changes occur, we provide:

  • Revision notes explaining any metric adjustments
  • Updated totals in the next reporting cycle
  • Clarification of known limitations or gaps

Vibble may also commission or cooperate with external audits to validate key aspects of our enforcement, ranking, and safety systems.

14. Contact & Regulatory Liaison

For questions, regulatory engagement, or formal requests relating to this Transparency Report:

Transparency Office (Vibble): transparency@vibble.to
Compliance & Regulatory Affairs: compliance@vibble.to
Legal (Nexa-Group): legal@nexa-group.org
Safety & Integrity Team: safety@vibble.to

15. Updates to this Transparency Framework

This Transparency Report template and underlying methodology may be updated to reflect new laws, regulatory interpretations, platform features, and industry best practices. Revised versions will include an updated effective date and, where relevant, a changelog summarizing key modifications.
