Artificial intelligence (AI) is rapidly reshaping the economy, workforce, and the way governments and businesses make decisions. While many of these changes are already underway, AI appears to be progressing faster than the legal profession is able to regulate. Despite the numerous benefits AI offers, it has yet to prove reliable in Canadian courts, whether used by self-represented litigants or legal professionals. Further development and regulation are needed.
In 2024, courts and tribunals increasingly scrutinized the role of AI in legal proceedings. These rulings highlight that while AI adoption is expanding, its effectiveness in litigation remains constrained. These decisions also reinforce the need to approach AI-driven decision making with caution, given the risk of biased outcomes. Although AI regulation is still in its early stages, legislative discussions throughout 2024 indicate that more comprehensive regulations may be on the horizon.
AI’s role in legal research and its use in court proceedings
While AI holds promise for enhancing legal research and writing, its use in the legal field comes with significant challenges. A major concern is the issue of “hallucinations” (AI-generated false information, including fabricated case law and legal principles). General-use AI tools exhibit hallucination rates between 58% and 82%, while even AI systems designed specifically for legal research still produce errors between 17% and 33% of the time.[1]
The persistence of these inaccuracies has led courts and legal organizations to issue strong warnings about the risks of relying on AI in legal proceedings. As underscored by multiple 2024 rulings, those who disregard these cautions often find that AI weakens, rather than strengthens, their legal position.
Consequences of AI-generated errors
When using AI to identify case law in support of a legal position, it is essential to verify its accuracy. In a recent decision of the Canada Industrial Relations Board (CIRB), a self-represented complainant cited 30 cases across their 125-page legal submission.[2] Strikingly, only two of these cases were real. While the CIRB lacked the statutory authority to award costs against the complainant, it emphasized how AI-generated misinformation can damage a party’s credibility and undermine the reliability of their arguments.
Courts have also considered whether AI-generated hallucinations could justify cost awards. In Zhang v. Chen, a Supreme Court of British Columbia (BCSC) application, the applicant’s counsel submitted AI-generated case law, which was later withdrawn before the hearing.[3] However, by that time, the respondent’s counsel had already spent significant resources addressing the inaccuracies.
The BCSC ultimately declined to award special costs, noting that the applicant’s counsel had not acted with intent to mislead and had already faced substantial negative publicity. However, the court held the lawyer personally responsible for the additional costs incurred due to the AI-generated errors.
The reliability of AI as evidence
While some parties attempt to rely on AI-generated case law, others have turned to AI-generated results as evidence. However, the British Columbia Civil Resolution Tribunal (BCCRT) recently rejected AI-generated research tendered as evidence in legal proceedings.
In Yang v. Gibbs, the applicant claimed the respondent had been unjustly enriched by receiving both an e-transfer and a cheque. To support this argument, the applicant submitted findings from ChatGPT, which suggested that the respondent’s email address and the one receiving the e-transfer were likely linked to the same device or local network.
The BCCRT, however, noted that ChatGPT itself acknowledged the uncertainty of its conclusion. The tribunal emphasized that AI-generated information is inherently unreliable and, as a result, assigned no weight to the applicant’s AI-based evidence.[4]
AI bias in employment practices
AI has revolutionized human resources management, particularly in recruitment, by enhancing efficiency and streamlining applicant screening. However, growing evidence suggests that AI-assisted hiring can reinforce biased employment practices, including discrimination based on race and gender.[5]
While AI-driven discrimination in recruitment has yet to be the subject of litigation, some jurisdictions are taking legislative steps to address these concerns. Beginning January 1, 2026, Ontario’s Employment Standards Act will require employers with 25 or more employees to disclose their use of AI in hiring.[6] This mandate ensures that any employer using AI to screen, assess, or select candidates must clearly state this in their job postings, promoting transparency and accountability in AI-assisted recruitment.
Canada’s AI regulations
In 2022, Canada’s federal government introduced Bill C-27,[7] which includes the Artificial Intelligence and Data Act (AIDA) – a proposed framework to regulate AI systems and prohibit harmful practices associated with their use.[8] However, with the recent prorogation of parliament, which terminated the current session, Bill C-27 has been dropped from the order paper.[9] In the meantime, the European Union has passed the EU AI Act, which will undoubtedly have an impact on Canadian organizations carrying on business in Europe or developing tools for the European market, and is likely also to be a major influence on future Canadian regulations in this area.
Currently, Canada has a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, to which a number of organizations have signed on. We will be closely monitoring national and international developments regarding specific AI regulations that may impact Canadian organizations in the future.
Key takeaways and practical implications
Judgments rendered in 2024 reinforce the challenges of reliably using AI in the legal field. While AI-assisted legal research tools are widely available, their outputs require thorough verification. In legal proceedings, individuals must exercise caution and ensure the accuracy of AI-generated information, as failure to do so can weaken a legal position or result in additional costs arising from reliance on false information.
Although AI is not yet subject to specific federal regulations in Canada, existing federal privacy legislation (the Personal Information Protection and Electronic Documents Act) and substantially similar provincial privacy legislation, such as Alberta’s Personal Information Protection Act, apply to the personal information processed by AI tools. Emerging case law suggests that organizations using AI for decision-making should implement safeguards to prevent the inadvertent delegation of authority to AI systems. Further, employers leveraging AI in human resources should proactively address potential biases in AI-assisted recruitment. While litigation over AI-driven hiring discrimination has yet to arise, the increasing reliance on AI in recruitment could soon lead to legal disputes.
Finally, it is possible that in the future, Parliament will re-introduce specific AI legislation. Should this occur, businesses should consider taking proactive steps to ensure they can meet proposed regulatory requirements. Should you have any questions, or have a civil or commercial dispute, please do not hesitate to contact a member of Miller Thomson’s Commercial Litigation group.
[1] Varun Magesh et al, “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools” (June 6, 2024) [unpublished, archived at Stanford Law School] DOI: https://doi.org/10.48550/arXiv.2405.20362.
[2] Choi and Lloyd’s Registrar Canada Ltd., Re, 2024 CIRB 1146, online: <https://decisia.lexum.com/cirb-ccri/cirb-ccri/en/item/522390/index.do> [Choi].
[3] Zhang v Chen, 2024 BCSC 285.
[4] Yang v Gibbs (dba D & G Cedar Fencing), 2024 BCCRT 613 at paras 21-22.
[5] Zhisheng Chen, “Ethics and discrimination in artificial intelligence-enabled recruitment practices” (2023) 10:567 Humanit Soc Sci Commun DOI: https://doi.org/10.1057/s41599-023-02079-x.
[6] Bill 149, Working for Workers Four Act, 2024, SO 2024, c 3 [Bill 149]; Employment Standards Act, 2000, SO 2000, c 41, s 8.4(1).
[7] Artificial Intelligence and Data Act, being part 3 of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2022 (second reading 24 April 2023), online: <www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading> [perma.cc/QE5C-YW6W] [AIDA].
[8] AIDA at cl 4.
[9] Ibid at cl 12.