A Gap in the Canadian Regulatory Framework for Health-Adjacent Artificial Intelligence Solutions

Published: September 2023 · Categories: 2023 Editorial Series, ChatGPT, Editorials


Jacob Hutton

PhD student, UBC Faculty of Medicine, BC Resuscitation Research Collaborative

Research trainee, Centre for Advancing Health and the Data Science and Health Research Cluster

In the health sector, existing regulations may not apply to AI solutions focused on health service delivery, and this is likely where the benefits and risks are largest.

ChatGPT and associated technologies present a stress test for Canadian policy makers beset by a number of intersecting pressures and incentives. On one hand, many public sector entities in Canada are dealing with crises of human resources, increased demand, and legacy infrastructure, leading to deficits in service delivery and challenges in delivering on political promises to improve services for Canadians. On the other hand, a recent boom in Artificial Intelligence (AI) technologies comes with the promise of AI-based solutions to many public sector problems, using cutting-edge methods to increase the efficiency and quality of public sector services.

Perhaps the sector under the greatest pressure is healthcare. Last year, the Canadian Medical Association released a statement on the critical need for more resources to address increased wait times and reduced access to care, highlighting AI technologies as a potential solution to this crisis.

While innovation in the AI sector has been progressing steadily for the past decade, the impact of ChatGPT on the public conversation is unprecedented. Accordingly, there has been a gold rush of existing companies and new startups seeking to apply these technologies to pressing societal issues and to reap the corresponding rewards, with healthcare a major focus of these efforts. Language-based technologies such as ChatGPT are well suited to integration into solutions that increase the efficiency of care delivery, such as tools that streamline routine processes like charting, and chatbots that help patients navigate the care-seeking journey. Small companies such as Glass Health and mega players such as Microsoft are integrating LLMs into clinician charting processes and claim to free up hours of clinician time per day. Companies such as Gyant equip health systems with AI-powered virtual assistants focused on enhancing the patient journey in accessing care. It is inevitable that such solutions will soon knock on the doorstep of Canadian health systems, promising to address our identified needs related to care efficiency and access to care. When they do, how will Canadian policy makers assess their claims of efficacy?

A robust regulatory scheme for health technologies exists in Canada, addressing pharmaceutical interventions, medical devices, and specific medical software products. In this framework, software used for direct medical applications is regulated as a health product; however, it is unclear whether technologies like ChatGPT fall into this category. Health Canada’s Guidance Document for Software as a Medical Device (SaMD) states that if software is “not intended to acquire, process, or analyze a medical image or signal”, “is intended to display, analyze, or print medical information about a patient or other medical information”, is “only intended to support” provider decision making, and is “not intended to replace … clinical judgment”, then it would not be classified as a medical device and as such would not require evaluation by Health Canada. Apart from an additional document, “Good Machine Learning Practice for Medical Device Development: Guiding Principles”, released jointly with the US Food and Drug Administration (FDA) and the UK Medicines and Healthcare products Regulatory Agency (MHRA), Health Canada provides no specific guidance on appraising AI products.

In parallel to this health-specific regulation, Canada has a federal regulatory scheme that outlines steps for the responsible use of AI in general. A new federal procurement process and impact assessment framework specifically for AI products signals that the federal government recognizes the need for dedicated processes for appraising AI solutions. However, this framework lacks the rigor of the framework Health Canada provides for medical products, and it places most of the responsibility for compliance with a set of responsible AI principles on the software vendor. This creates a situation where care-adjacent solutions that apply AI to increase the efficiency of care may be subjected to less rigorous pre-implementation evaluation than products that fall explicitly under the purview of Health Canada.

In medicine, provision of a treatment or procedure represents only a fraction of the patient experience. Broader factors such as access to care, timely delivery of services, and clinician face time all play a role in healthcare quality, patient satisfaction, and value-based care. Technologies such as ChatGPT represent a unique opportunity to streamline many of these processes. Doing so may improve the patient experience, but it also has the potential to harm the Canadian healthcare system and create difficulties for patients and providers if such technologies are implemented without the rigorous assessment that is standard for traditional medical interventions. While these solutions may not fall under the current Health Canada regulatory framework, their potential risks and benefits to the quality and efficiency of care provision highlight the need for a more stringent set of requirements before public dollars are invested in their implementation, which may require clarification of, or amendment to, existing Health Canada policies.