On July 12th, 2024, the European Union’s Artificial Intelligence Act (the “AI Act” or the “Act”) was published in the Official Journal of the European Union, making it the world’s first comprehensive legislation regulating artificial intelligence (“AI”) technologies. The AI Act will apply to any business or organization developing or using AI tools in Europe and will enter into force across all Member States on August 1st, 2024. Its provisions will then begin to apply in stages, with different implementation periods for specific requirements; the majority of provisions will become effective on August 2nd, 2026.

Implementation timeline

An overview of when the new rules set out in the AI Act will begin to apply is provided below:

  • Publication July 12th, 2024: The AI Act was published in the Official Journal of the European Union, serving as formal notification of the new law and confirming the dates for compliance.
  • Coming into force August 1st, 2024: The AI Act formally enters into force 20 days after being published in the Official Journal.
  • Stage I on February 2nd, 2025: Chapters I and II of the AI Act, which outline general provisions, definitions, and prohibitions on AI systems that present unacceptable risk, will be enforceable. Examples of AI systems that will be prohibited due to unacceptable risk include systems that manipulate or deceive people in order to distort human behavior, systems that classify people using social scoring, systems that infer the emotions of employees in the workplace, and systems used for predictive policing.
  • Stage II on August 2nd, 2025: Rules in Chapter III, Section 4 regarding notifying authorities and notified bodies, Chapter V regarding general-purpose AI (GPAI) models, Chapter VII regarding governance, and Chapter XII regarding penalties will all become enforceable. The most notable of these are the provisions that set out requirements for GPAI models, which are designed to perform a wide range of tasks and adapt to new situations. GPAI underpins generative AI tools such as ChatGPT, Gemini (formerly Bard) and Llama. The new requirements for GPAI will include mandatory notification procedures, documentation requirements, and responsibilities surrounding cybersecurity and the mitigation of systemic risk.
  • Fully applicable on August 2nd, 2026: This marks the default date that provisions of the AI Act come into effect. This includes high-risk AI systems listed in Annex III, such as AI in recruiting and managing staff, AI in critical infrastructure, AI involving biometrics and AI used to provide access to services such as credit scoring and eligibility for emergency health care.
  • Final stage on August 2nd, 2027: All provisions of the AI Act become applicable for all risk categories, including regulations for high-risk systems listed in Annex I which are systems subject to existing EU health and safety legislation such as medical devices, radio equipment and agricultural vehicles.

Concerns and uncertainties

Some argue that many of the obligations under the AI Act are drafted in relatively vague terms, which has caused concern amongst businesses because detailed guidance on the Act is not expected to emerge for several months. For example, the Act contains only high-level descriptions of the requirements for each of the four risk categories, so businesses may be left seeking clarity on essential details as they develop compliance programs.

What does this mean for Canadian businesses?

The AI Act regulates global entities that operate within the European Union, which means that Canadian companies that operate in the EU or provide AI systems to users in the EU must abide by the Act and should remain aware of their obligations under it. If the AI Act applies, Canadian companies must assess their AI systems to determine whether they are prohibited, high-risk, or constitute GPAI, and follow the rules for the risk category into which each system falls.

There is currently no regulatory framework in Canada specific to AI. However, Canada has proposed the Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27, the Digital Charter Implementation Act, 2022. AIDA would set the foundation for the responsible design, development and deployment of AI systems that impact the lives of Canadians, but it has come under significant scrutiny due to a lack of clarity. For example, AIDA purports to apply to “high-impact” AI systems, but no definition of what it means to be “high-impact” is provided. AIDA also fails to establish rules pertaining to the use of AI by government or law enforcement.

Although Bill C-27 successfully passed its second reading in April 2023, it has been stagnant ever since. AIDA is not expected to be adopted as law in Canada before significant changes have been made to it, and there is a possibility that AIDA will be withdrawn from Bill C-27 and put through a full public consultation before being reintroduced. Despite this, Canadian companies should ensure that they are meeting current compliance requirements and staying up to date with new and evolving regulations.

If you have any questions about AI legislation and how your organization may be impacted, please contact our Privacy, Data Governance and Cybersecurity team.