Mitigating the risks of generative artificial intelligence in the health industry


September 29, 2023 | Kathryn M. Frelick, David Krebs, Safa Warsi

Background

The emergence of generative artificial intelligence (AI) as a paradigm-shifting technology, and its potential impact on the health industry, cannot be overstated. AI is already used in Canada for both clinical and non-clinical applications, particularly in diagnostics, imaging, virtual care, disease surveillance, and research, and the emergence of generative AI opens up a vast world of new possibilities.

Unlike traditional AI or machine learning, which learns from data and makes decisions or predictions based on that data, generative AI systems can create new and original content through algorithms that learn from large datasets and mimic patterns or relationships in that data. Prompts on virtually any topic can be entered, and the responses generated by these systems can be eerily human-like. New and innovative ways to incorporate generative AI are being contemplated to improve administrative efficiency, clinical documentation and electronic health record systems, patient communication and education, public health, patient simulation and training, clinical decision support and treatment, and research and development. There are many considerations for health industry organizations looking to design, adopt, and implement AI solutions for specific purposes.

Emergence of generative AI in the public domain

The public release of generative AI products, such as OpenAI’s ChatGPT in late 2022, has created intense interest in these tools. Governments and organizations are grappling with the ethical and practical implications of generative AI and the lack of a robust regulatory framework[1] that is sufficiently targeted and capable of managing the scope of concerns that AI poses.

On September 27, 2023, the Minister of Innovation, Science and Industry announced Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the “Code”), effective immediately. The Code identifies measures that organizations are encouraged to apply when developing and managing advanced generative AI systems capable of generating content. It is intended as a critical bridge between now and the adoption of the proposed Artificial Intelligence and Data Act, which was introduced as part of Bill C-27 in June 2022. The Code outlines measures aligned with six core principles:

  • Accountability: Organizations will implement a clear risk management framework proportionate to the scale and impact of their activities.
  • Safety: Organizations will perform impact assessments and take steps to mitigate risks to safety, including addressing malicious or inappropriate uses.
  • Fairness and equity: Organizations will assess and test systems for biases throughout the lifecycle.
  • Transparency: Organizations will publish information on systems and ensure that AI systems and AI-generated content can be identified.
  • Human oversight and monitoring: Organizations will ensure that systems are monitored and that incidents are reported and acted on.
  • Validity and robustness: Organizations will conduct testing to ensure that systems operate effectively and are appropriately secured against attacks.

The Code is based on the input received from a cross-section of stakeholders, including the Government of Canada’s Advisory Council on Artificial Intelligence, through the consultation on the development of a Canadian code of practice for generative AI systems.[2]   The Code will also help reinforce Canada’s contributions to ongoing international deliberations on proposals to address common risks encountered with large-scale deployment of generative AI, including at the G7 and among like-minded partners.

What do health industry organizations need to do now?

The Code provides a helpful framework for health industry organizations looking to develop and manage generative AI systems and tools. However, health industry organizations also need to take proactive steps now to ensure that their staff, contractors, and providers are aware of the risks of the generative AI tools already available in the public domain. It is essential to take steps to protect your confidential business assets and sensitive information, including personal information and personal health information.

What are some of the risks of generative AI systems?

Confidentiality and privacy: Generative AI systems are trained on vast datasets of text, images, or other data and can be used to generate novel content in a wide variety of forms and contexts. This raises significant concerns about ownership of data and intellectual property, as well as the privacy and security of personal information. In May 2023, the Privacy Commissioner of Canada announced a joint investigation with several provincial privacy authorities into ChatGPT to determine whether any violations of privacy legislation may have arisen.[3]

There are privacy and security risks associated with publicly available AI tools that retain user input data to further train their models. Any information that is publicly accessible, or that is input into such a platform, including confidential business information, personal information, or personal health information, may become part of the model’s training data and be exposed to other users. It may not be possible to remove this data once it has been disclosed.

Although publicly available generative AI tools warn users against sharing confidential, personal, private, or privileged information, this remains a significant organizational risk. Health industry organizations must ensure that adequate controls are in place to protect confidential information and personal information. Organizations need to be aware of their obligations, including under privacy legislation, to address the evolving risks associated with the use of generative AI.[4] Unless using a closed system, users must assume that any information they input could enter the public domain.

Reliability issues: One of the key risks of generative AI technology is inaccuracy, including the potential for incorrect or even fabricated responses (sometimes called “hallucinations”) that look and seem real. In a recent study conducted by the Mayo Clinic, researchers asked ChatGPT medical questions and asked it to provide answers with corresponding references. They found that the answers contained many factual errors, and that many of the references provided were fabricated, although they looked deceptively relevant. Such reliability issues should be considered when using generative AI technology.

Malicious activity: Generative AI systems may be vulnerable to malicious use, for example, fraudulent use of a system to impersonate real individuals (deepfakes), phishing attacks or other forms of cybercrime, malicious or buggy code, misinformation and disinformation, or poisoned datasets. Generative AI can also be prompted to create malware such as ransomware, increasing the risk that attacks will proliferate.

Ethical concerns and bias: Generative AI systems may lack transparency, whether in the algorithm itself or because the training data and source code are not publicly available, which raises ethical concerns. Biases embedded in a system can be perpetuated in its outputs, unintentionally skewing data and results and adversely affecting fairness and equity.

What can health industry organizations do to mitigate the risks of generative AI?

The following are some practical steps that health industry organizations can take to mitigate the risks associated with evolving generative AI technologies:

  • Establish work groups: Develop a multi-functional team or teams dedicated to identifying, assessing, and managing the risks associated with generative AI technologies on an enterprise basis. Consider including subject matter experts from IT, privacy, risk, human resources, finance, research, innovation offices, and clinical and operational supports, with senior leadership and the board of directors providing oversight and strategic direction. Some organizations have established senior AI officers or other leadership positions responsible for managing generative AI risk and compliance; alternatively, this responsibility can be incorporated into existing portfolios.
  • Board governance and oversight: Generative AI presents unique opportunities and risks. Boards of directors need to understand their oversight responsibilities and incorporate AI into their enterprise risk management frameworks. Consider the need for board education and regular updates as these issues continue to evolve.
  • Establish generative AI guidelines or policy: Develop a framework governing the use of generative AI within the organization, including ethical and legal considerations, applications, and limitations. Consider whether these guidelines will apply internally only or whether they extend to external contractors and vendors.
  • Update acceptable use policy: Update your acceptable use policies to ensure that individuals understand whether generative AI technology can be used with organizational systems and in what circumstances, as well as any limitations or necessary approvals. For example, generative AI may be permitted to assist with preliminary drafting or communications and to help adjust the tone, length, and level of detail of writing, but should never be used with confidential, sensitive, or personal information, including personal health information.
  • Update privacy policy: Health industry organizations must ensure that privacy policies, notices, and public statements reflect the organization’s position on the use of generative AI as it evolves. Consider consent requirements when collecting, using, or disclosing personal information or personal health information in connection with generative AI technology.
  • Prioritize education and training: Management, staff, and contractors should be made aware of the risks of generative AI and the organization’s policies regarding its use. If staff are expected to incorporate AI technology into their day-to-day duties, they should be made aware of the best way to use the technology and how to minimize risk and liability for themselves and the organization.
  • Conduct privacy impact and security assessments: Privacy impact assessments and security assessments are essential when introducing any new technology involving personal information. It is also important to ensure that organizations have effective security controls in place to address publicly accessible generative AI. For example, will staff have the ability to access generative AI tools through organizational systems, and are there mechanisms in place to audit this use?
  • Human oversight and monitoring: Human oversight and monitoring of AI systems is critical to ensure that systems are developed, implemented, and used ethically and safely, and that robust measures and testing are in place to prevent generative AI from being misused.

Miller Thomson can assist your organization in protecting itself against the risks of emerging generative AI technologies. Please reach out to a member of our Health Industry group if you have any questions.


[1] Health Canada has been dealing with AI and machine learning in the context of medical devices for some time. On September 18, 2023, it published Draft guidance: Pre-market guidance for machine learning-enabled medical devices, which is open for input until October 29, 2023. Health Canada has also developed previous guidance, for example, Software as a medical device and Good machine learning practices for medical device development: Guiding principles.

[2] Innovation, Science and Economic Development Canada released Canadian Guardrails for Generative AI – Code of Practice for consultation on August 16, 2023, which formed the basis of the Code.

[3] See OPC to investigate ChatGPT jointly with provincial privacy authorities.

[4] As an example, Ontario’s Personal Health Information Protection Act, 2004 (PHIPA) requires that health information custodians take reasonable steps to ensure that their agents do not collect, use, or disclose personal health information other than as authorized by the custodian. Health information custodians remain responsible for personal health information, regardless of whether their agents have complied with their obligations under PHIPA. As it relates to personal health information, health information custodians must ensure that their agents know whether, and in what circumstances, the use of generative AI is authorized, if at all.

Disclaimer

This publication is provided for informational purposes only. It may contain material from other sources, and we do not guarantee its accuracy. This publication is not legal advice or a legal opinion.

Miller Thomson LLP uses your contact information to send you electronic communications on legal topics, seminars, and events that may be of interest to you. If you have any questions about our information practices or our obligations under Canada’s anti-spam legislation, please email privacy@millerthomson.com.

© Miller Thomson LLP. This publication may be reproduced and distributed in its entirety provided no alterations are made to its form or content. Any other form of reproduction or distribution requires the prior written consent of Miller Thomson LLP, which may be obtained by emailing newsletters@millerthomson.com.