SCANDIC DATA's AI ethics statement

Introduction & Objectives

SCANDIC DATA uses artificial intelligence (AI) and algorithmic systems in various areas: predictive maintenance and failure prediction for servers, dynamic load balancing, energy and cooling management, security analysis (e.g. anomaly detection, DDoS defense), customer support (chatbots) and optimization of supply chains for hardware and energy. As part of the SCANDIC GROUP, we are committed to using AI responsibly, transparently and in accordance with fundamental rights. This AI Ethics Statement defines the principles, processes and control mechanisms for the development, procurement, operation and use of AI at SCANDIC DATA.

The company

SCANDIC ASSETS FZCO
Dubai Silicon Oasis DDP
Building A1/A2
Dubai - 342001
United Arab Emirates

Phone +971 4 3465-949
Mail Info@ScandicAssets.dev
represents the brand and is supported by:

SCANDIC TRUST GROUP LLC
IQ Business Center
Bolsunovska Street 13-15
Kyiv - 01014, Ukraine

Phone +38 09 71 880-110
Mail Info@ScandicTrust.com

The cooperation partner is:

LEGIER Beteiligungs mbH
Kurfürstendamm 14
10719 Berlin
Federal Republic of Germany

HRB 57837 - VAT ID DE 413445833
Phone +49 (0) 30 99211-3469
Mail: Office@LegierGroup.com

SCANDIC ASSETS FZCO and LEGIER Beteiligungs mbH are non-operational service providers; operational activities are carried out by SCANDIC TRUST GROUP LLC.

The declaration is based on the upcoming EU AI Act, the GDPR, the PDPL, industry-specific cloud and telecommunications regulations and international best practices. The aim is to promote innovation while making risks manageable in order to secure the trust of customers, employees and society.

Overview

– 1. basic values & guiding principles
– 2. governance & responsibilities
– 3. legal & regulatory framework
– 4. data ethics & data protection
– 5. transparency & explainability
– 6. fairness, bias & inclusion
– 7. human-in-the-loop & critical decisions
– 8. safety & robustness
– 9. sustainability
– 10. AI in data center operation
– 11. training, awareness & culture
– 12. monitoring, audit & continuous improvement


1. basic values & guiding principles


– People-centeredness: AI serves to support people. Applications must respect the dignity and autonomy of customers, employees and partners. Decisions that have a significant impact (e.g. termination of contracts due to security incidents) are made by people.
– Legal conformity: All AI systems comply with the applicable laws (GDPR, PDPL, EU AI Act, telecommunications and energy industry law). Prohibited AI practices (e.g. biometric mass surveillance) are excluded.
– Responsibility & accountability: Responsible persons are appointed for each AI system. Decisions are traceable, contestable and can be reviewed by humans. Documentation and audit trails enable seamless tracking.
– Proportionality: The use of AI is proportionate to purpose and risk. High-risk applications (e.g. automatic network shutdown in the event of a security alarm) are subject to strict control mechanisms.
– Transparency: Users are informed when AI is used. The principles and functions of the systems are explained in an understandable way.
– Fairness & inclusion: AI must not discriminate against anyone on the basis of origin, gender, age, disability or other sensitive characteristics. Bias is actively identified and minimized.
– Security & resilience: AI systems are hardened against manipulation and misconduct. Security incidents are proactively reported and analyzed.
– Sustainability: The ecological footprint of our AI infrastructure is taken into account. Energy-efficient models, resource-saving hardware and sustainable operation of the data center are standard.


2. governance & responsibilities


An AI Ethics Board at Group level monitors all AI initiatives. It is made up of experts from Legal, Data Protection, IT Security, Data Center Technology, Operations and Human Resources. This board reviews new AI projects, assesses risks and approves high-risk applications. Internal guidelines regulate the use of AI and integrate with existing compliance, data protection and supply chain guidelines. An owner is appointed for each project to manage the development, operation and monitoring of the system (RACI model). The SCANDIC GROUP's ESG Committee ensures that AI issues are integrated into the corporate strategy and reports to the Management Board and the Advisory Board.


3. legal & regulatory framework


SCANDIC DATA complies with all relevant legal standards:

– EU AI Act: For high-risk applications (e.g. biometric access controls, automated incident response), we carry out impact assessments, document data sources, training methods and performance metrics, and continuously monitor compliance.
– GDPR, PDPL and ePrivacy rules: We only use data with a legitimate basis for AI-supported analyses and personalization. Personal data is minimized, pseudonymized or anonymized.
– IT and security regulation: We are guided by ISO standards (ISO/IEC 27001, 27017, 27701), the NIST framework, BSI IT baseline protection and industry-specific standards for data centers and cloud providers.
– Labor and supply chain laws: AI-supported systems are not used to monitor employees unless this is legally permissible and proportionate. In the supply chain context, we use AI for risk analyses without using discriminatory criteria.


4. data ethics & data protection


Responsible handling of data forms the basis of our AI strategy. We ensure that training and production data:

– are collected in a lawful manner;
– comply with the principle of data minimization;
– are free from discriminatory bias;
– are stored and processed in a secure environment;
– in the case of sensitive data (e.g. health information), are only used with explicit consent.

In addition, data origins are documented so that it can be traced which sources have been incorporated into a model. When using synthetic data or generative models, we mark the content as AI-generated.
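The provenance documentation and AI-generated labeling described above could be sketched roughly as follows. This is an illustrative sketch only; the record fields and dataset names are assumptions, not SCANDIC DATA's actual schema.

```python
# Hypothetical sketch: document which sources fed a dataset or model output,
# and label synthetic content as AI-generated. All field names are
# illustrative assumptions.

def provenance_record(dataset, sources, ai_generated=False):
    """Describe the traceable origins of a dataset or generated artifact."""
    return {
        "dataset": dataset,
        "sources": sources,                 # traceable data origins
        "ai_generated": ai_generated,
        "label": "AI-generated" if ai_generated else "original",
    }

rec = provenance_record(
    "cooling-sensor-train-v2",                        # assumed dataset name
    sources=["dc1 sensor logs", "synthetic augmentation"],
    ai_generated=True,
)
print(rec["label"])  # AI-generated
```

A record like this makes it possible to trace, per model, which sources were incorporated, as the paragraph above requires.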


5. transparency & explainability


We label AI interactions (e.g. chatbots, recommendation systems) clearly and unambiguously. Users receive understandable information about how a system works, the main factors that go into a decision and how they can request a human review if necessary. When making decisions about security measures (e.g. automatically blocking an IP address), we disclose which criteria are relevant without revealing trade secrets. Decision-making processes are logged to enable internal or external audits.
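The decision logging described above might look roughly like the following sketch. The field names, the `block_ip` action and the disclosed criteria are assumptions for illustration, not SCANDIC DATA's real logging schema.

```python
# Hypothetical sketch of an auditable log record for an automated security
# decision (e.g. blocking an IP address). Field names are illustrative.
import json
from datetime import datetime, timezone

def log_ai_decision(system, action, criteria, human_review_requested=False):
    """Return a structured log record suitable for later internal or
    external audits."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                      # which AI system acted
        "action": action,                      # what it decided
        "criteria": criteria,                  # disclosed decision factors
        "human_review_requested": human_review_requested,
    }
    # In production this would go to tamper-evident storage (e.g. a SIEM).
    return json.dumps(record)

entry = log_ai_decision(
    system="anomaly-detector",
    action="block_ip 203.0.113.7",
    criteria=["request rate above threshold", "known botnet signature"],
)
print(entry)
```

Listing the criteria explicitly in the record is what allows relevant factors to be disclosed without revealing the underlying trade secrets.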


6. fairness, bias & inclusion


Our algorithms are systematically tested for discrimination. This includes statistical analyses of training data, diversified test groups and regular checks during operation. We use procedures to detect unequal treatment (e.g. disparate impact analyses) and correct models accordingly. Particular attention is paid to vulnerable groups; systems must not allow cognitive weaknesses to be exploited. When managing capacity and setting prices for cloud services, we ensure that customers are not arbitrarily disadvantaged.
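A disparate impact analysis of the kind mentioned above can be sketched as follows. The group data and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions, not SCANDIC DATA's actual review criteria.

```python
# Hypothetical sketch of a disparate impact check ("four-fifths rule").
# Group data and the 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Share of favourable outcomes (1 = favourable, 0 = unfavourable)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values well below ~0.8 are commonly treated as a warning sign."""
    low, high = sorted((selection_rate(group_a), selection_rate(group_b)))
    return low / high if high > 0 else 1.0

group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # selection rate 0.375
ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("flag model for fairness review")
```

In operation, a check like this would run over real outcome data at regular intervals, feeding the correction step described above.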


7. human-in-the-loop & critical decisions


Automated systems must not replace human expertise when it comes to safety or business-critical decisions. In the event of emergency measures, contract terminations due to breaches or significant configuration changes, AI-supported proposals are always reviewed by qualified persons. An escalation and override mechanism ("human-in-the-loop") is in place to ensure that humans retain ultimate control.
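A human-in-the-loop gate of the kind described above can be sketched as a simple approval check: AI-proposed critical actions are queued until a named person approves them. The action names and function are illustrative assumptions, not an actual SCANDIC DATA interface.

```python
# Hypothetical sketch of a human-in-the-loop gate: critical AI-proposed
# actions require explicit human approval before execution. Action names
# are illustrative assumptions.

CRITICAL_ACTIONS = {"emergency_shutdown", "contract_termination",
                    "major_config_change"}

def execute(action, approved_by=None):
    """Run an AI-proposed action; critical ones need a named approver."""
    if action in CRITICAL_ACTIONS and approved_by is None:
        return f"queued: {action} awaits human review"
    suffix = f" (approved by {approved_by})" if approved_by else ""
    return f"executed: {action}{suffix}"

print(execute("contract_termination"))                     # queued for review
print(execute("contract_termination", approved_by="ops"))  # executed
```

The key design point is that the automated path can only queue a critical action, never complete it, so ultimate control stays with a person.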


8. safety & robustness


All AI systems are hardened against adversarial attacks, prompt injection, data poisoning and other manipulation attempts. We use defense-in-depth strategies: Access restrictions, strict authentication, encryption, continuous monitoring and red team testing. Security incidents are immediately investigated, documented and resolved. Incident response plans establish reporting chains and define countermeasures. Models are regularly updated to close known vulnerabilities.


9. sustainability


The development and operation of AI consumes resources. SCANDIC DATA relies on energy-efficient model architectures, resource-efficient hardware (e.g. GPUs/TPUs with high performance per watt) and a scalable infrastructure. Our data centers are powered by renewable energy and optimize workloads to minimize power consumption and cooling requirements. Model updates and retraining are planned to avoid unnecessary computing work; old equipment is recycled or disposed of properly.


10. AI in data center operation


SCANDIC DATA uses AI specifically to make data center operations more efficient, secure and sustainable:

– Predictive maintenance: Machine learning models analyze sensor and operating data from servers, UPSs, air conditioning systems and network components to predict failure probabilities. Maintenance is planned before faults occur.
– Energy and cooling management: AI algorithms optimize power flow, load distribution between racks and the use of cooling systems to keep the Power Usage Effectiveness (PUE) below 1.25.
– Security analyses: Anomaly detection models recognize unusual access, DDoS patterns, data exfiltration or insider threats. The results are fed into our SIEM/SOAR platform.
– Capacity planning: AI predicts demand for compute and storage resources so that we can adjust hardware procurement, virtualization and network capacities in good time.
– Chatbots and support: Intelligent assistants help with standard inquiries, ticket assignment and technical documentation. More complex questions are handed over to human employees.

The above-mentioned ethical principles are taken into account in all applications.
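The PUE target mentioned above is a simple ratio: total facility energy divided by the energy consumed by IT equipment alone. The sketch below illustrates the metric; the sample energy readings are invented for illustration, only the 1.25 target comes from the text.

```python
# Sketch of the PUE (Power Usage Effectiveness) metric: total facility
# energy / IT equipment energy. An ideal data center would score 1.0.
# The sample readings below are illustrative assumptions.

def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

total_kwh = 1200.0   # IT load + cooling + lighting + conversion losses
it_kwh = 1000.0      # energy consumed by IT equipment alone
value = pue(total_kwh, it_kwh)
print(f"PUE = {value:.2f}")                                 # PUE = 1.20
print("within target" if value < 1.25 else "above 1.25 target")
```

Keeping PUE below 1.25 means less than 25% of total energy goes to overhead (cooling, power conversion, lighting) rather than to the IT load itself.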


11. training, awareness & culture


SCANDIC DATA promotes a corporate culture in which employees question AI decisions and use technology responsibly. Training programs teach the basics of AI ethics, data protection, security awareness, data quality and how to deal with bias. Employees are encouraged to report anomalies and participate in the continuous improvement process. Managers ensure that ethical considerations are included in all phases of a project (design, development, deployment, operation).


12. monitoring, audit & continuous improvement


Our AI systems are continuously monitored. Key figures such as accuracy, fairness, false positive rates, energy consumption, CO₂ emissions and user satisfaction are included in regular reports. Internal and external audits check compliance with the AI policy. Findings from audits, user feedback and technical analyses are used to adapt models and processes. This declaration is updated at least once a year or in the event of significant changes.
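One of the key figures named above, the false positive rate, can be monitored with a check along these lines. The confusion-matrix counts and the 5% alert threshold are illustrative assumptions, not SCANDIC DATA's actual tolerances.

```python
# Hypothetical monitoring sketch: compute the false positive rate from
# confusion-matrix counts and flag a model for review when it drifts past
# a threshold. Counts and the 5% threshold are illustrative assumptions.

def false_positive_rate(false_positives, true_negatives):
    """FPR = FP / (FP + TN)."""
    return false_positives / (false_positives + true_negatives)

def needs_review(fpr, threshold=0.05):
    """Flag the model when the FPR exceeds the agreed tolerance."""
    return fpr > threshold

fpr = false_positive_rate(false_positives=30, true_negatives=970)
print(f"FPR = {fpr:.3f}")                                    # FPR = 0.030
print("review needed" if needs_review(fpr) else "within tolerance")
```

Tracking such figures per reporting period is what turns the audit findings described above into concrete model and process adjustments.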