Friendium AI-Generated & Manipulated Media Policy

This policy governs the creation, sharing, and labeling of AI-generated, synthetically altered, or digitally manipulated media on Friendium, as well as how violations are enforced. As a real-identity social network operated by Nexa-Group, Friendium prioritizes authenticity, trust, personal dignity, and protection from deception and harm.

1. Purpose & Core Principles

Advances in artificial intelligence make it possible to create highly realistic images, videos, audio, and text that can mislead viewers, impersonate real people, and cause harm to individuals and society. This policy exists to protect users from deception, identity abuse, misinformation, harassment, and reputational damage.

Friendium is built on real-identity interactions. Any use of AI that undermines trust, authenticity, or informed consent is incompatible with the platform.

2. Scope of This Policy

This policy applies to all content types, including but not limited to:

  • AI-generated images, videos, audio, and text
  • Deepfakes and face/body swaps
  • Voice cloning and synthetic speech
  • AI-edited photos or videos
  • Digitally altered screenshots or recordings
  • Profile photos, cover images, and avatars
  • Private messages, comments, stories, and live content

3. Definitions

AI-Generated Media refers to content created wholly or partially using machine learning, generative models, or automated synthesis tools.

Manipulated Media includes edited or altered content that changes the meaning, context, or perceived reality of the original material.

Deepfake refers to synthetic media that realistically depicts a person saying or doing something they did not actually say or do.

4. Absolute Prohibitions

The following content is strictly prohibited and may result in immediate account suspension or termination:

  • Deepfakes of real individuals without explicit consent
  • AI content impersonating private individuals
  • Synthetic sexual or intimate content involving any person
  • AI-generated content involving minors (zero tolerance)
  • Manipulated media intended to harass, extort, or threaten
  • Fabricated evidence or falsified recordings
  • Political or civic deepfakes designed to mislead voters

5. Consent & Identity Protection

Users may not create or share AI-generated representations of identifiable individuals without clear, documented consent.

This includes:

  • Face swaps
  • Voice cloning
  • Body manipulation
  • Simulated actions or speech

6. Labeling & Disclosure Requirements

AI-generated or heavily manipulated media must be clearly disclosed as such. Failing to disclose AI involvement in a way that misleads viewers may itself constitute a violation.

  • Labels must be visible and understandable
  • Disclosures must not be hidden or obscured
  • Hashtags alone may be insufficient

7. Permitted Uses of AI

The following uses may be allowed when transparent and non-deceptive:

  • Artistic or creative illustrations
  • Satirical or parody content clearly labeled
  • Educational demonstrations
  • Accessibility tools (e.g., AI captions)
  • Cosmetic photo enhancements that do not alter identity

8. Misinformation & Deception Risks

AI-generated content may not be used to:

  • Fabricate news or events
  • Misrepresent real-world incidents
  • Spread false health or safety information
  • Manipulate public opinion through deception

9. Harassment, Defamation & Abuse

AI may not be used to:

  • Humiliate or degrade individuals
  • Create false accusations
  • Simulate illegal or immoral behavior
  • Enable harassment campaigns

10. Political, Civic & Public Trust Protections

During elections or sensitive civic periods, Friendium applies heightened scrutiny to AI-generated content.

  • Political deepfakes are prohibited
  • Manipulated speeches or statements are not allowed
  • Content may be removed regardless of labeling

11. Detection & Enforcement

Friendium employs a combination of automated detection, human review, and third-party tools to identify manipulated media.

Enforcement actions may include:

  • Content removal
  • Labeling or warning overlays
  • Reach reduction
  • Account warnings or strikes
  • Temporary or permanent suspension

12. User Reporting

Users are encouraged to report suspected AI manipulation. Reports involving impersonation, minors, or public harm are prioritized.

13. Preservation & Legal Compliance

Content may be preserved for:

  • Legal obligations
  • Law enforcement requests
  • Safety investigations
  • Regulatory compliance

14. Appeals Process

Users may appeal enforcement decisions related to AI-generated content. Appeals are reviewed by trained moderation and policy staff.

15. Platform Integrity & Future Updates

As AI technologies evolve, Friendium may update this policy, detection methods, and disclosure requirements to address emerging risks.

16. Responsibility of Creators

Users remain fully responsible for AI-assisted content they share. Use of third-party tools does not reduce accountability.

17. International & Legal Alignment

This policy aligns with emerging global frameworks, including AI governance principles, online safety regulations, and privacy laws.

18. Contact

Trust & Safety: safety@friendium.com
Legal: legal@friendium.com
User Support: support@friendium.com
