EU Adopts AI Act – Key Components and Next Steps for Organizations
The EU has adopted a comprehensive regulation on artificial intelligence (AI) that aims to foster innovation, ensure trustworthiness, and protect fundamental rights. The regulation sets out harmonized rules and obligations for providers, deployers, and other operators of AI systems in the EU, as well as for actors in third countries whose AI systems are placed on the EU market or whose output affects persons in the EU. The regulation also establishes governance structures and enforcement mechanisms at the EU and national levels and introduces a system of fines for non-compliance.
The AI Act introduces a significant set of rules for organizations that develop, market, or use AI systems in the EU; for providers and deployers of AI systems established outside the EU where the output produced by the AI system is used in the EU; and for authorized representatives of providers not established in the EU. These organizations must comply with various requirements depending on the risk level and the intended use of their AI systems. The regulation also creates opportunities for these organizations to participate in consultations, standardization, and certification processes, which should help establish legal certainty and a level playing field.
Key components of the regulation
AI systems and risk-based approach. The regulation defines AI systems as machine-based systems that operate with varying levels of autonomy, may exhibit adaptiveness after deployment, and infer from their inputs how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments. The regulation applies a risk-based approach, distinguishing between four categories of AI systems: prohibited, high-risk, limited-risk, and minimal-risk (a simplified sketch after the following list illustrates the tiers).
- Prohibited AI practices: The regulation bans certain AI practices that are considered to contravene EU values and fundamental rights, such as manipulative or subliminal AI systems that distort human behavior, systems that exploit vulnerabilities, and real-time remote biometric identification systems in publicly accessible spaces for law enforcement, except in narrowly defined situations. The regulation also prohibits emotion recognition systems in workplaces and educational institutions, except where they are used for medical or safety reasons.
- High-risk AI systems: The regulation identifies specific AI systems as high-risk based on their intended use in certain areas or sectors, such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, asylum, border control management, administration of justice, and democratic processes. The regulation also empowers the European Commission to amend the list of high-risk AI systems by delegated acts, taking into account the advice of the European Artificial Intelligence Board (AI Board).
Providers of high-risk AI systems must ensure compliance with a set of requirements related to risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. Deployers of high-risk AI systems must ensure that the systems are used in accordance with the instructions for use and the intended purpose, and that they are subject to monitoring and evaluation. Notably, the risks to be addressed are limited to those that can reasonably be mitigated or eliminated through the development or design of the high-risk AI system, or through the provision of adequate technical information. Before deploying certain new or substantially updated high-risk AI systems, certain deployers must also perform a fundamental rights impact assessment (FRIA) in accordance with the regulation's requirements. The FRIA complements the data protection impact assessment under the GDPR where the introduction of the AI system affects personal data. Moreover, most providers must establish a quality management system, maintain documentation and automatically generated logs, and report serious incidents.
- Transparency obligations for certain (limited-risk) AI systems: The regulation introduces transparency obligations for AI systems that interact with natural persons or generate content, requiring clear disclosure when individuals are interacting with an AI system or are exposed to content generated or manipulated by an AI system. Deployers of AI systems that generate or manipulate image, audio, or video content (deep fakes) must disclose that the content has been artificially generated or manipulated.
- Minimal-risk AI systems: The regulation does not impose specific obligations on AI systems that pose minimal or no risk to health, safety, or fundamental rights, but encourages the use of voluntary codes of conduct and standards to promote ethical and trustworthy AI. General-purpose AI models, by contrast, are subject to a dedicated set of obligations, including technical documentation and transparency requirements, with additional risk assessment and mitigation duties for models posing systemic risk.
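Purely as an illustration of this tiered structure (and not an official classification tool), the following Python sketch models the four risk categories and the headline obligations attached to each. The category names and obligation lists are simplified paraphrases of the summary above, not terms defined by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified model of the AI Act's four risk categories."""
    PROHIBITED = "prohibited"      # banned practices
    HIGH_RISK = "high-risk"        # listed areas and regulated products
    LIMITED_RISK = "limited-risk"  # transparency obligations only
    MINIMAL_RISK = "minimal-risk"  # voluntary codes of conduct

# Illustrative (non-exhaustive) mapping of tiers to headline obligations,
# paraphrased from the summary above -- not legal advice.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH_RISK: [
        "risk management system", "data governance",
        "technical documentation", "transparency", "human oversight",
        "accuracy, robustness, and cybersecurity",
        "conformity assessment and CE marking",
    ],
    RiskTier.LIMITED_RISK: [
        "disclose AI interaction and AI-generated or manipulated content",
    ],
    RiskTier.MINIMAL_RISK: ["voluntary codes of conduct and standards"],
}

def headline_obligations(tier: RiskTier) -> list[str]:
    """Return the simplified headline obligations for a risk tier."""
    return OBLIGATIONS[tier]

print(headline_obligations(RiskTier.LIMITED_RISK))
```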
Conformity assessments and CE marking. The regulation also establishes a system of conformity assessment for high-risk AI systems, which can be either self-assessment by the provider or third-party assessment by a notified body, depending on the type and the use of the AI system. The regulation introduces a CE (EU conformity) marking for high-risk AI systems that comply with the requirements, and an EU database for high-risk AI systems listed in Annex III of the act, with registration obligations for providers and deployers.
Governance and enforcement mechanisms. The regulation sets up governance structures and enforcement mechanisms at the EU and national levels, involving various actors and stakeholders. The regulation further creates the AI Board, composed of representatives of the national competent authorities and the Commission, to facilitate consistent application and provide technical expertise. Notably, the regulation also establishes an advisory forum, consisting of representatives of civil society, industry, and academia, to advise the AI Board and the Commission on AI matters.
National supervisory authorities. The regulation designates national competent authorities for supervising the application of the regulation and requires EU member states to ensure that these authorities have adequate resources, expertise, and powers. The regulation also empowers market surveillance authorities to enforce the regulation, with powers to investigate and take action against non-compliant AI systems, such as issuing warnings, imposing corrective measures, or withdrawing products from the market.
Substantive penalties. The regulation introduces a system of penalties for non-compliance, ranging from EUR 15 million or 3% of total worldwide annual turnover (whichever is higher) for infringements of the requirements for high-risk AI systems, to EUR 35 million or 7% of total worldwide annual turnover (whichever is higher) for prohibited AI practices. The supply of incorrect, incomplete, or misleading information to notified bodies or national competent authorities in reply to a request is subject to administrative fines of up to EUR 7.5 million or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher. For small and medium-sized enterprises (SMEs), including start-ups, each fine is capped at whichever of the two amounts is lower.
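To make the "whichever is higher" and SME "whichever is lower" mechanics concrete, here is a minimal Python sketch. The tier figures are the maximums quoted above; the function and dictionary names are our own illustration, not anything defined by the Act, and the output is not legal advice.

```python
# Illustrative sketch of the AI Act fine mechanics described above.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),     # EUR cap, % of turnover
    "high_risk_requirements": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(infringement: str, worldwide_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Maximum administrative fine: the higher of the fixed amount and
    the turnover-based amount -- except for SMEs, where it is the lower."""
    fixed_cap, pct = FINE_TIERS[infringement]
    turnover_based = pct * worldwide_turnover_eur
    return (min if is_sme else max)(fixed_cap, turnover_based)

# A large provider with EUR 2bn turnover engaging in a prohibited practice
# faces up to max(35m, 140m) = EUR 140 million; an SME with EUR 10m
# turnover faces up to min(35m, 0.7m) = EUR 700,000.
print(max_fine("prohibited_practice", 2_000_000_000))            # 140000000.0
print(max_fine("prohibited_practice", 10_000_000, is_sme=True))  # 700000.0
```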
The regulation also provides for judicial remedies and redress for individuals or legal entities that are affected by non-compliant AI systems.
Next steps for organizations
The regulation introduces new obligations for organizations that develop, market, or use AI systems in the EU, which will have to comply with various requirements depending on the risk level and the intended use of their AI systems. Nevertheless, the regulation also creates opportunities to participate in consultations, standardization, and certification processes and to benefit from the legal certainty and the level playing field that the regulation provides, at least in the EU. If the example of the EU's GDPR is any indication, other regions are likely to follow.
The following points are the next steps each organization should consider to comply with the AI Act:
- Review the regulation and determine whether their AI systems fall within the scope of high-risk AI systems or prohibited AI practices, and comply with the relevant requirements.
- Register their high-risk AI systems in the EU database (once the database is set up) and report any serious incidents or malfunctions to the national competent authorities.
- Establish or update their quality management systems and documentation processes to meet the new standards for high-risk AI systems and prepare for conformity assessment and CE marking.
- Draw up technical documentation for a high-risk AI system before that system is placed on the market or put into service, and keep this documentation up to date. The technical documentation should be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements of the AI Act, and it should contain, at a minimum, the elements set out in Annex IV of the Act.
- Ensure transparency in the interactions of their AI systems with individuals and in content generation or manipulation, and disclose the use of emotion recognition or biometric categorization systems where applicable.
- Monitor the use and performance of their AI systems and ensure human oversight and intervention where necessary, particularly if personal data is being processed.
- Be aware of the penalties for non-compliance and take steps to avoid prohibited AI practices and adhere to the requirements for high-risk AI systems.
- Participate in consultations and standardization efforts as relevant, and follow guidance and recommendations from the AI Board and the Commission. The Commission will further develop the EU's expertise and capabilities in the field of AI through the AI Office, supported by a scientific panel of independent experts.
- Track the implementation dates for the different provisions of the regulation and prepare accordingly.
The regulation will enter into force 20 days after its publication in the Official Journal of the EU, which is expected in mid-to-late June 2024. Its provisions will then apply on staggered dates, ranging from six months after entry into force (for the prohibited practices) to 36 months (for certain high-risk AI systems), with most provisions applying after 24 months.
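As a rough planning aid, the following Python sketch computes indicative application dates from a hypothetical publication date. The publication date is an assumption chosen purely for illustration, and the month offsets are a simplified version of the staggered timeline summarized above; organizations should rely on the officially published dates.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day clamped to 28 to stay valid)."""
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(y, m + 1, min(d.day, 28))

# Hypothetical publication date -- an assumption for illustration only.
published = date(2024, 6, 15)
entry_into_force = published + timedelta(days=20)

# Simplified application offsets (months after entry into force),
# per the staggered timeline summarized above.
milestones = {
    "prohibited practices": 6,
    "general-purpose AI and governance provisions": 12,
    "most provisions (incl. Annex III high-risk systems)": 24,
    "certain high-risk AI systems": 36,
}

print(f"Entry into force: {entry_into_force}")
for provision, months in milestones.items():
    print(f"{provision}: applies from {add_months(entry_into_force, months)}")
```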
You can review the full text of the AI Act via this link.
For more information about this new regulation and its specific implications for the use of AI systems in your business or organization, please contact our experts in the multidisciplinary CMS TMC AI team.