AI TRANSPARENCY STATEMENT

SportBooster – Enterprise Artificial Intelligence Governance Framework

Version 1.0

Effective Date: 20 February 2026

Milkam s.r.o., operator of the SportBooster platform (“Operator”), is committed to the responsible, transparent, and lawful deployment of artificial intelligence systems.

This AI Transparency Statement describes how AI is used within SportBooster, the safeguards in place, and user rights relating to AI-assisted processing.


1. Scope of AI Use

SportBooster may use artificial intelligence technologies to support platform functionality, including:

  • Training recommendations
  • Performance analytics
  • Fraud detection
  • Content moderation
  • Communication filtering
  • Personalization features
  • Risk scoring and anomaly detection
  • System optimization

AI systems are supportive tools designed to assist users and administrators. They do not replace independent human judgment.


2. Nature of AI Outputs

AI-generated outputs:

  • are probabilistic in nature
  • may contain inaccuracies or incomplete information
  • are based on available input data
  • are not guaranteed to be error-free

AI-generated recommendations do not constitute:

  • medical advice
  • legal advice
  • financial advice
  • certified professional coaching

Users remain responsible for evaluating AI-generated suggestions before acting upon them.


3. Transparency of AI Interaction

Where it may not be reasonably obvious that a user is interacting with an AI system, the user is informed that the feature is AI-powered.

AI-generated content, moderation actions, or recommendations are clearly identified where applicable.


4. Risk Classification (EU AI Act Alignment)

SportBooster does not deploy AI systems classified as “high-risk” under the EU AI Act for purposes such as:

  • biometric identification,
  • credit scoring,
  • automated legal or judicial decision-making,
  • predictive law enforcement,
  • social scoring.

AI systems within SportBooster are primarily limited-risk or minimal-risk support systems.

Should regulatory classification change, this document will be updated accordingly.


5. AI and Minors

Where AI systems interact with accounts of minors:

  • parental control mechanisms apply,
  • stricter content moderation thresholds are activated,
  • enhanced filtering mechanisms are used.

AI systems are designed to avoid generating inappropriate or harmful content.

Heightened safeguards are applied to protect minors.


6. Automated Decision-Making and GDPR Article 22

Certain AI systems may perform automated analysis, including:

  • suspicious activity detection,
  • spam identification,
  • abuse monitoring,
  • risk scoring.

Where automated processing produces legal or similarly significant effects, users may:

  • request human intervention,
  • contest automated decisions,
  • request meaningful information about the logic involved,
  • seek review of account restrictions.

Critical account restrictions are subject to review mechanisms.


7. Human Oversight

SportBooster maintains human oversight over AI-assisted systems.

Human review may be triggered in cases of:

  • account suspension,
  • fraud detection,
  • content removal disputes,
  • high-risk anomaly detection.

AI systems support final enforcement actions; they do not independently control them.


8. Data Usage in AI Systems

AI systems operate under strict data governance principles:

  • data minimization,
  • purpose limitation,
  • access control restrictions,
  • logging and monitoring.

User-specific data is not used to train external third-party AI models unless such use is explicitly permitted by law or authorized by user consent.

Personal data processing related to AI is governed by the Privacy Policy and, where applicable, the Data Processing Agreement.


9. Supplementary Safeguards

The Operator implements technical and organizational safeguards including:

  • encryption in transit,
  • controlled access permissions,
  • role-based access control,
  • anomaly detection logging,
  • periodic system audits,
  • bias monitoring procedures.


10. Prohibition of Manipulative Practices

AI systems are not designed to manipulate user behavior through deceptive, coercive, or exploitative techniques.

SportBooster does not deploy subliminal, psychologically manipulative, or socially exploitative AI mechanisms.


11. Monitoring and Continuous Improvement

AI systems are continuously evaluated for:

  • unintended bias,
  • technical reliability,
  • regulatory compliance,
  • emerging legal requirements.

This AI Transparency Statement may be updated to reflect regulatory changes, including implementation of the EU AI Act.


12. Governance and Accountability

AI governance is subject to internal compliance controls aligned with:

  • GDPR
  • EU AI Act (where applicable)
  • Consumer protection law
  • Platform safety standards

AI governance is integrated with the broader legal framework of SportBooster, including:

  • Terms of Service
  • Privacy Policy
  • Data Processing Agreement
  • International Data Transfer Addendum


13. User Inquiries

For AI-related inquiries or requests concerning automated decision-making, please contact:

[insert compliance email]


This AI Transparency Statement forms part of the global compliance framework of the SportBooster platform.