Algorithmic Accountability Policy
This Algorithmic Accountability Policy describes how Vibble designs, deploys, audits, and governs its ranking, recommendation, trends, and safety models. It is intended to ensure the responsible use of machine learning and to give regulators, users, and partners clarity on the oversight and safeguards in place.
1. Purpose & Scope
Vibble uses algorithmic systems to rank posts, suggest accounts, surface trends, filter spam, and detect harmful content. This policy explains:
- What algorithmic systems we use and why
- How we mitigate bias and harmful outcomes
- What governance and oversight mechanisms exist
- How users can exercise control or seek recourse
The policy covers recommendation systems (feeds, “For You” timelines, follow suggestions), safety and moderation models (e.g., abuse detection), and integrity systems (spam, bot, and manipulation detection).
2. Algorithmic Systems Covered
Major classes of algorithmic systems at Vibble include:
- Recommendation & Ranking Models: determine which posts appear and in what order in feeds and Explore.
- Timeline & Trends Models: surface trending topics, hashtags, and conversations.
- Safety & Moderation Models: detect hate, harassment, self-harm, CSAM indicators, and policy violations.
- Integrity Models: identify spam, bots, coordinated inauthentic behavior, and engagement manipulation.
- Abuse & Fraud Detection Models: monitor suspicious logins, account takeovers, and payment fraud.
3. Design Principles
Vibble’s algorithmic systems are developed according to core principles:
- Safety-first: Preventing serious harm, abuse, and exploitation takes priority over engagement optimization.
- Fairness & Non-Discrimination: Systems are evaluated to reduce unfair impact on protected groups.
- Transparency: Users, regulators, and partners receive clear explanations of how algorithms influence outcomes at a high level.
- Human Oversight: High-risk decisions include human review, especially where penalties are severe.
- Continuous Improvement: Models are iterated and refined based on audits, appeals, and new safety signals.
4. Input Signals & Excluded Signals
Vibble’s recommendation and ranking systems may use signals such as:
- Post-level engagement (likes, replies, reposts, quotes, bookmarks)
- Watch/reading time and scroll depth
- User follow graph and interaction history
- Topic, language, and basic regional signals
- Quality and safety scores based on policy-compliance history
The following categories are not used as direct input signals for personalization:
- Race, ethnicity, or religion
- Sexual orientation or gender identity
- Political party membership
- Financial status or credit scores
- Health status or medical conditions
Some of these attributes may be inferred in aggregate during bias audits but are not used in production systems for targeting or ranking individual users.
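As a minimal illustrative sketch only (not Vibble's production code), exclusion of sensitive categories can be enforced at the feature-pipeline level with an explicit allowlist. The signal names, the `ALLOWED_FEATURES` set, and the `build_ranking_features` helper below are assumptions made for this example.

```python
# Hypothetical allowlist check for personalization features.
# Signal names and helper functions are illustrative assumptions,
# not Vibble's actual feature schema.

ALLOWED_FEATURES = {
    "engagement.likes", "engagement.replies", "engagement.reposts",
    "engagement.bookmarks", "behavior.watch_time", "behavior.scroll_depth",
    "graph.follows", "graph.interaction_history",
    "content.topic", "content.language", "context.region",
    "quality.policy_compliance_score",
}

EXCLUDED_PREFIXES = (
    "sensitive.race", "sensitive.religion", "sensitive.sexual_orientation",
    "sensitive.gender_identity", "sensitive.political_party",
    "sensitive.financial", "sensitive.health",
)


def build_ranking_features(raw_signals: dict) -> dict:
    """Keep only allowlisted signals and fail loudly on excluded categories."""
    features = {}
    for name, value in raw_signals.items():
        if name.startswith(EXCLUDED_PREFIXES):
            raise ValueError(f"Excluded signal category in ranking input: {name}")
        if name in ALLOWED_FEATURES:
            features[name] = value
    return features
```

Failing loudly on excluded prefixes, rather than silently dropping them, makes violations of the exclusion rule visible during testing and audits.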
5. Algorithmic Risk & Impact Assessments
Before major algorithmic changes are launched, Vibble conducts internal risk and impact assessments that:
- Identify potential harms (e.g., amplification of hate, misinformation, or harassment).
- Evaluate effects on vulnerable groups and protected characteristics.
- Consider regulatory obligations (e.g., EU DSA systemic risk analysis).
- Define monitoring metrics and guardrails for rollout.
High-risk features may be launched behind experiments, through staged rollouts, or in limited regions to monitor real-world impact before broader deployment.
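One way to express "monitoring metrics and guardrails for rollout" is as an explicit guardrail configuration checked between rollout stages. The sketch below is a hypothetical illustration; the metric names, thresholds, and stage fractions are assumptions, not real Vibble values.

```python
# Hypothetical rollout guardrails: a staged launch only advances to the next
# traffic fraction while no guardrail metric regresses beyond its threshold.
# Metric names, thresholds, and stages are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Guardrail:
    metric: str                    # monitored metric name
    max_relative_increase: float   # abort threshold vs. the control group


ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]   # fraction of traffic per stage

GUARDRAILS = [
    Guardrail("violating_content_impressions", 0.02),
    Guardrail("user_reports_per_1k_sessions", 0.05),
    Guardrail("appeal_overturn_rate", 0.10),
]


def stage_is_healthy(treatment: dict, control: dict) -> bool:
    """Advance to the next rollout stage only if no guardrail is breached."""
    for g in GUARDRAILS:
        relative_increase = (
            treatment[g.metric] - control[g.metric]
        ) / max(control[g.metric], 1e-9)
        if relative_increase > g.max_relative_increase:
            return False
    return True
```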
6. Testing, Evaluation & Auditing
Algorithmic systems undergo structured evaluation, including:
- Offline Evaluation: Using historical and synthetic datasets to measure accuracy, precision, recall, and fairness metrics.
- Online Experiments (A/B tests): Limited deployments to measure impact on safety, engagement, and user satisfaction.
- Bias & Fairness Audits: Internal reviews to evaluate differential impact across languages, regions, or communities.
- Post-Incident Reviews: Additional audits when a system is implicated in safety or integrity incidents.
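A bias and fairness audit of this kind can be sketched, under the assumption of labeled offline examples carrying a group attribute (such as language or region), as a per-group comparison of classifier metrics. The record fields below are assumptions for illustration.

```python
# Minimal sketch of a per-group offline evaluation for a safety classifier.
# Example records are assumed to have "group", "predicted", and "actual" fields.

from collections import defaultdict


def per_group_metrics(examples):
    """Compute precision and recall of a safety classifier per group."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for ex in examples:
        c = counts[ex["group"]]
        if ex["predicted"] and ex["actual"]:
            c["tp"] += 1
        elif ex["predicted"] and not ex["actual"]:
            c["fp"] += 1
        elif not ex["predicted"] and ex["actual"]:
            c["fn"] += 1

    metrics = {}
    for group, c in counts.items():
        precision = c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else 0.0
        recall = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0
        metrics[group] = {"precision": precision, "recall": recall}
    return metrics
```

Large gaps in precision or recall between groups are the kind of differential impact such audits are designed to surface for review.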
7. Mitigation Measures & Guardrails
When risk is identified, Vibble applies mitigation strategies, including:
- Downranking or limiting the spread of borderline or sensitive content.
- Adding labels or interstitial warnings for context (elections, crises, health).
- Excluding certain content types from recommendations entirely.
- Boosting authoritative or verified sources for specific civic or health topics.
- Applying stricter thresholds in high-risk regions or during sensitive events.
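At the scoring level, several of these mitigations can be pictured as adjustments applied after base ranking. The following is a simplified sketch under assumed scores, thresholds, and flags; it is not Vibble's ranking implementation.

```python
# Illustrative score-level mitigations: exclusion from recommendations,
# downranking of borderline content, and a stricter threshold during
# sensitive events. All values and flags are hypothetical.

def apply_mitigations(base_score: float,
                      borderline_score: float,
                      eligible_for_recommendation: bool,
                      sensitive_event: bool) -> float:
    """Return an adjusted ranking score after safety guardrails."""
    if not eligible_for_recommendation:
        return 0.0  # excluded content types never enter recommendations

    threshold = 0.5 if sensitive_event else 0.7   # stricter cutoff in high-risk periods
    if borderline_score >= threshold:
        return base_score * 0.2                   # downrank rather than remove
    return base_score
```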
8. User Controls, Transparency & Choice
Vibble provides user-facing controls to increase clarity and autonomy, such as:
- Ability to switch between a chronological timeline and a ranked timeline (where supported).
- Controls to mute keywords, topics, and accounts.
- Settings to limit sensitive media and adult content.
- Options to reset certain personalization signals over time.
Explanatory text accompanies key surfaces such as Trends, Explore, and recommendation modules, outlining the main factors influencing what users see.
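Conceptually, these controls take effect at serving time. The sketch below assumes hypothetical post and settings structures and is illustrative only: muted keywords and accounts are filtered out first, then the timeline is ordered chronologically or by ranking score depending on the user's choice.

```python
# Minimal sketch of user controls applied when assembling a timeline.
# Field names and settings structure are assumptions for illustration.

def build_timeline(posts, user_settings):
    """Filter muted content, then order chronologically or by ranking score."""
    visible = [
        p for p in posts
        if not any(kw.lower() in p["text"].lower()
                   for kw in user_settings["muted_keywords"])
        and p["author_id"] not in user_settings["muted_accounts"]
    ]
    if user_settings.get("chronological", False):
        return sorted(visible, key=lambda p: p["created_at"], reverse=True)
    return sorted(visible, key=lambda p: p["ranking_score"], reverse=True)
```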
9. High-Risk Topics: Elections, Public Health & Crises
Special handling applies for elections, public-health events, and crises:
- Stricter ranking controls and additional labels for disputed or misleading content.
- Partnerships with election authorities and trusted NGOs for signals and escalation.
- Downranking or removal of content that violates civic integrity or crisis misinformation policies.
- Priority weighting for authoritative information in search, trends, and Explore surfaces.
10. Human Oversight, Escalation & Governance
Algorithmic systems are overseen by cross-functional governance teams including Safety, Product, Engineering, Legal, and Compliance. Oversight mechanisms include:
- Formal approval checkpoints for high-impact model changes.
- Escalation paths for regulators and trusted partners to raise systemic concerns.
- Regular governance reviews of algorithmic performance, incidents, and user complaints.
- Integration of appeal and user-feedback data into future model improvements.
11. External Oversight & Regulatory Cooperation
Vibble engages with regulators and, where appropriate, external experts to improve accountability:
- Participation in regulatory consultations and code-of-practice processes.
- Providing systemic risk and mitigation summaries to competent authorities.
- Cooperation with independent auditors or academic partners where lawful and practical.
12. User Rights, Complaints & Redress
Users may:
- Appeal moderation decisions influenced by algorithmic systems.
- Report suspected unfair treatment, downranking, or visibility issues.
- Request additional information about recommendation logic at a high level.
Complaints relating specifically to algorithmic systems can be sent to:
Algorithmic Transparency & Accountability: algorithms@vibble.to
Safety & Integrity Team: safety@vibble.to
Compliance & Regulatory Affairs: compliance@vibble.to
13. Data Protection & Privacy Alignment
Algorithmic systems operate in compliance with Vibble’s Privacy Policy, the GDPR, and other applicable privacy regulations. Data minimization, purpose limitation, and security safeguards apply equally to training data, production inputs, and logs used for audits.
14. Updates to this Algorithmic Accountability Policy
As Vibble’s systems evolve and regulations develop, this policy will be updated. Material updates may include:
- Expanded descriptions of new or significantly changed models.
- Updated disclosures about user controls and signals used.
- Documentation of new governance processes or regulatory obligations.
The effective date and version history will be maintained to support audit trails and regulatory review.
15. Contact & Further Information
For further questions regarding algorithmic accountability, governance, or regulatory collaboration:
Algorithmic Governance (Vibble): algorithms@vibble.to
Transparency Office: transparency@vibble.to
Compliance (Nexa-Group): compliance@nexa-group.org
Legal: legal@vibble.to