On December 9, 2023, a provisional agreement on the EU Artificial Intelligence Act (“AI Act”) (“Provisional Agreement”) was reached between the Council presidency and the European Parliament. This groundbreaking Regulation aims to ensure the safety of AI systems in the EU and their adherence to fundamental rights and values. The AI Act is set to apply two years after it enters into force, with some exceptions for specific provisions.
The AI Act is designed to promote the safe, trustworthy development and uptake of AI across the EU. It introduces a risk-based approach to regulation, with stricter rules for AI systems that pose higher risks. The EU aims to be the global leader in AI regulation, much as the General Data Protection Regulation (“GDPR”) has shaped global data protection and privacy laws since it came into force some five years ago.
With that in mind, Canadian organizations, both developers and users of AI tools, would be well advised to familiarize themselves with the AI Act and to monitor its implementation over the coming months and years, alongside developments pertaining to Canada’s proposed Artificial Intelligence and Data Act (“AIDA”).
AIDA is not the only notable AI development in Canada. As we have reported previously, Canada has released a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, and the Office of the Privacy Commissioner of Canada (“OPC”) has released a joint statement with its provincial counterparts regarding the responsible use of generative AI.
Definitions and Scope
The definition of “AI system” plays a vital role in determining the scope of the AI Act. To align with global standards, EU policymakers based the final definition on the latest version developed by the OECD. The proposed definition is:
Artificial intelligence system means software that is developed with [specific] techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with
In contrast, AIDA has the following definition:
Artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.
This discrepancy has been noted by Canada’s Minister of Innovation, Science and Industry, François-Philippe Champagne, who has proposed aligning AIDA’s definition with evolving international norms, among other recommended changes.
The Provisional Agreement on the AI Act specifies that the regulation is limited to areas within the scope of EU law and does not impact EU Member States’ competencies in national security. It also states that the AI Act does not apply to systems used solely for military or defence purposes. Additionally, the Regulation excludes AI systems dedicated to research and innovation and those used by individuals for non-professional purposes.
AI System Classifications
The Provisional Agreement on the AI Act establishes a framework of protection that categorizes AI systems based on the level of risk they pose. AI systems presenting only limited risk are subject to light transparency obligations, such as disclosing that content was generated by AI so that users can make informed decisions about its use.
In contrast, a wide range of high-risk AI systems will be permitted on the EU market only if they fulfill specific requirements and obligations. These requirements, which have been refined to make them more technically feasible and less burdensome (particularly for small and medium-sized enterprises (SMEs)), cover matters such as data quality and the technical documentation that must be prepared to demonstrate compliance.
Additionally, the Provisional Agreement delineates the responsibilities and roles of the various actors in an AI system’s value chain, including providers and users of AI systems, and integrates these roles with existing obligations under other EU legislation, such as data protection and sector-specific laws.
Systems Posing Unacceptable Risks
The Provisional Agreement also identifies specific uses of AI that pose unacceptable risks and therefore bans them within the EU, including:
- cognitive behavioural manipulation,
- untargeted scraping of facial images from the internet or CCTV footage,
- emotion recognition in workplaces and educational institutions,
- social scoring,
- biometric categorization for sensitive data inference, and
- certain types of predictive policing.
Specific Provisions/Exceptions for Law Enforcement and General-Purpose AI
The Provisional Agreement includes specific considerations for law enforcement, acknowledging agencies’ unique operational needs by allowing the use of high-risk AI tools in urgent situations while ensuring that fundamental rights remain protected. It permits real-time remote biometric identification in publicly accessible spaces under strict conditions, such as searching for victims of serious crimes or preventing imminent threats.
The Provisional Agreement introduces new provisions for general-purpose AI (GPAI) systems, which can be used for many different purposes and integrated into other, high-risk systems. Specific rules are established for foundation models: large AI systems capable of competently performing a wide range of distinct tasks. Foundation models must comply with transparency obligations before being placed on the market, and a stricter regime applies to “high-impact” foundation models that could pose systemic risks.
Governance Structure
A new governance structure will also be set up: an AI Office within the European Commission will oversee the most advanced AI models, supported by a scientific panel of independent experts, while an AI Board comprising member state representatives will act as a coordination platform and an advisory body on the implementation of the Regulation.
Penalties
Penalties for non-compliance are significant. Fines are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations involving the banned AI applications, €15 million or 3% for violations of the AI Act’s other obligations, and €7.5 million or 1.5% for the supply of incorrect information. More proportionate caps are provided for SMEs and start-ups. Individuals and legal entities will be able to file complaints regarding non-compliance with the relevant market surveillance authority.
Transparency
The Provisional Agreement emphasizes the importance of a fundamental rights impact assessment before a high-risk AI system is deployed. Increased transparency is required, particularly for public entities using high-risk AI systems, and people must be informed when they are exposed to an emotion recognition system.
Measures in Support of Innovation

The Provisional Agreement also modifies the proposal’s provisions on innovation, with the aim of creating a more innovation-friendly legal framework. These measures include AI regulatory sandboxes, which allow the development and real-world testing of innovative AI systems under specific conditions and safeguards, as well as actions to support smaller companies, including targeted derogations to reduce their administrative burden.
Should you have any further questions or concerns, please feel free to reach out to a member of Miller Thomson’s Cybersecurity team.