The medico-legal lens on AI use by Canadian physicians

A deep dive


Published: September 2024

Purpose of this paper:

Artificial intelligence (AI) has the potential to support physicians in a broad range of clinical tasks, reducing administrative burden and improving patient safety. However, the evidence base for clinical applications is still developing, and the scale, scope, and speed of AI innovation have challenged governments, regulators, and the courts to keep pace. In this context, the adoption of AI faces a number of medico-legal challenges.

To support the safe adoption of AI, this paper aims to build understanding of some of the unique characteristics of AI technology, proposes more focused discussion of specific AI tools using a risk-based approach, and clarifies the current regulatory environment and its gaps. A framework of responsibilities and accountabilities and a call to action for key stakeholders are proposed, to foster the responsible use of AI within this evolving paradigm.

Introduction

The proliferation of AI is expected to significantly impact how patients receive care and how physicians practise medicine. The rapid evolution and adoption of AI technologies may present a source of significant medico-legal risk, given AI’s evolving functional capabilities and diverse applications within the healthcare domain. The pace of development necessitates a conversation about the next steps in AI, to address key gaps in the regulatory environment and facilitate appropriate adoption.

Recognizing the potential impact of AI on physician practice and healthcare delivery, the College of Physicians and Surgeons of Alberta (CPSA) and the Canadian Medical Protective Association (CMPA) jointly hosted a symposium in November 2023, “Innovation, Safety, and AI: Developing regulatory and medico-legal approaches to AI in Canadian healthcare.” This event brought together leaders from healthcare and other disciplines from across Canada for a discussion on the opportunities and challenges posed by AI. The collective goal was to explore how Canada’s regulatory environment and medico-legal approaches can support the safe, responsible deployment of AI in healthcare, without becoming undue barriers.

This symposium represents but one of the sources of insight and information that CMPA has sought out to better understand AI and its impact on the work of physicians.

This paper reflects the medico-legal landscape at a point in time (mid-2024) when AI technology, its uses, and healthcare are evolving. It addresses the following issues in an effort to advance the discussion on AI:

  • Untangling “AI”: The term AI refers to a wide spectrum of technologies and applications. Discussing these technologies only in general terms impedes practical conversations about adoption, implementation, and risk management. Assessing specific applications and taking a risk-based approach will help define practical measures for the safe adoption of AI.
  • Framing stakeholder responsibilities: The pace of change presents challenges for regulators, AI developers, and healthcare providers, contributing to medico-legal risks. There is a lack of clarity regarding roles and accountabilities. This paper outlines the current regulatory framework, identifies areas of uncertainty, and calls for concerted action to facilitate a discussion on next steps.

Untangling “AI”

AI encompasses a range of technologies with diverse applications. A key insight from the CPSA/CMPA symposium is that broad definitions, such as “a machine’s ability to perform cognitive functions we usually associate with human minds”1, are of limited use. These definitions focus on how AI may replace human cognition; they do little to differentiate AI from other computer systems, and may act as a barrier to understanding the short- to medium-term risks of AI.

CMPA’s analysis indicates that different AI tools present different risk profiles. A key step to managing liability is not to lump all AI applications together, but rather to focus on how the technologies are used.2

There are many definitions of AI across academic literature, international and national law, and regulations. Another way to approach AI technologies is to understand that they generally have the following characteristics:

  • Data is provided as input
  • There are specific objectives for the technology to achieve
  • The technology has a level of autonomy to achieve the objective
  • Output is information, such as a command to move a robotic arm, generated content, or a proposed solution to a problem3
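
To make these characteristics concrete, the following is a minimal, illustrative sketch in Python. It is not drawn from any specific medical product; all data, values, and names are hypothetical. It shows data provided as input, a specific objective, a degree of autonomy in how that objective is met, and information as output.

```python
# A toy, illustrative example of the four characteristics above.
# All values and names are hypothetical.

# 1. Data is provided as input: past lab values with known outcomes (1 = abnormal).
training_data = [(5.1, 0), (6.0, 0), (7.2, 1), (8.4, 1), (6.8, 1), (5.5, 0)]

# 2. A specific objective: choose a decision threshold that minimizes
#    classification errors on the training data.
def errors(threshold, data):
    return sum((value >= threshold) != bool(label) for value, label in data)

# 3. A level of autonomy: the system selects the threshold itself,
#    rather than a human hard-coding the rule.
candidates = [value for value, _ in training_data]
threshold = min(candidates, key=lambda t: errors(t, training_data))

# 4. Output is information: a flag suggesting whether a new value needs review.
def flag_for_review(value):
    return value >= threshold

print(f"Learned threshold: {threshold}")              # 6.8 with this toy data
print("Flag 7.0 for review?", flag_for_review(7.0))   # True
```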

Characteristics of AI

Understanding the unique characteristics of AI technology, distinct from other computer systems, is important for facilitating its safe adoption.

1: Data is central to AI.

AI relies on large datasets for training and operation. AI systems take data as input, process it, and provide data as output. In a clinical context, examples of these outputs might include a potential diagnosis, a summary of a patient encounter, or a recommended course of action.

Generative AI is a type of AI that creates entirely new outputs from given inputs, unlike traditional software that produces outputs by referencing pre-existing material. While generative AI is a landmark technological evolution, other types of AI applications like predictive modeling also hold substantial value.

Key insight:

Privacy/cybersecurity: The use and processing of data raises concerns from a privacy and cybersecurity perspective, including access to data, authorization for processing the data, data retention, and safeguards.

2: AI can “learn.”

Machine learning, which underpins most modern AI, involves algorithms that learn from data to identify and capture complex, layered, and non-linear patterns; the AI is trained on these data. This capability allows AI to adapt and improve based on new information. Machine learning components often work dynamically within AI systems, enabling continuous learning and evolution in real-world settings. This means that AI models with the same code and algorithms will produce different outputs if their training data differ.

Large Language Models (LLMs) are a recent breakthrough in generative AI, excelling at language-related tasks like understanding context, answering questions, summarizing texts, and creating content. LLMs now handle a variety of inputs beyond text, including audio, video, and data files.

Key insight:

Model evolution: Since AI models learn from new data, they will change over time, potentially becoming more accurate and reliable. Conversely, models might “drift,” becoming less accurate and reliable when the data they are subsequently trained on are of lower quality or less representative. Continuous monitoring and updating of AI models are essential to ensure their reliability and relevance.
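
As an illustration of what such monitoring might involve, here is a minimal sketch in Python. It is a hypothetical example rather than a prescribed method: the accuracy floor, the data, and the function names are assumptions made for illustration only.

```python
# A minimal sketch of post-deployment drift monitoring: compare recent model
# outputs with clinician-confirmed outcomes and flag when accuracy falls
# below an agreed floor. Thresholds and data are hypothetical.

DRIFT_THRESHOLD = 0.90  # assumed acceptable accuracy floor for this tool

def monitor_accuracy(predictions, ground_truth):
    """Track accuracy on a recent window of labelled cases."""
    correct = sum(p == y for p, y in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)
    if accuracy < DRIFT_THRESHOLD:
        # In practice this would trigger review, retraining, or rollback.
        print(f"ALERT: accuracy {accuracy:.0%} is below the {DRIFT_THRESHOLD:.0%} floor")
    return accuracy

# Example monitoring window: model outputs vs. confirmed outcomes.
recent_predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
confirmed_outcomes = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
monitor_accuracy(recent_predictions, confirmed_outcomes)  # 80% -> alert
```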

3: AI is complex and opaque.

AI technologies often produce outputs in ways that are not easily understood by the people observing or using them. One reason is that some of the core underlying algorithms, called “neural networks,” identify patterns in the data that are difficult to explain. Generative AI can also produce outputs that are wholly fabricated, known as “hallucinations,” and present this false information authoritatively.

Tools and methods for assessing AI accuracy and reliability are still developing.3,4,5 The complexity of AI outputs complicates these assessments. For instance, even though binary outputs (e.g. true/false) have established accuracy metrics, such as false positive and false negative rates, the reasoning behind a particular output may not always be clear. Evaluating more complex outputs, such as AI-generated medical notes, can be even more difficult, since there may not be a single “correct” approach or answer.
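
For binary outputs, the established metrics referred to above can be computed directly. The following is a minimal sketch in Python; the predictions, labels, and the meaning of 1/0 are hypothetical and used only for illustration.

```python
# A minimal sketch of the established metrics for binary outputs mentioned
# above: false positive and false negative rates. Data are hypothetical;
# 1 = "abnormal, needs review", 0 = "normal".

def binary_error_rates(predictions, labels):
    """Return (false positive rate, false negative rate)."""
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # AI outputs
labels = [1, 0, 0, 1, 0, 1, 1, 0]   # confirmed ground truth
fpr, fnr = binary_error_rates(preds, labels)
print(f"False positive rate: {fpr:.0%}, false negative rate: {fnr:.0%}")  # 25%, 25%
```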

Key insight:

Explainability: One significant challenge is the "black box" nature of AI, where the reasoning process of the algorithm is not transparent. This opacity complicates the validation of outputs and the assessment of the algorithm's reliability and stability.

AI applications in healthcare

AI systems can be deployed to achieve a number of objectives in many settings. An examination of these different uses provides a more practical understanding of the distinctive risks, limitations, and benefits of these technologies, and the steps required for assessment and mitigation of potential harms depending on the use case.


Category: Clinical and medical purposes6

  • Analysis of medical imaging
  • Image guided surgery
  • Personalized medicine

Category: Administrative and operational uses

  • Administrative support of a healthcare facility
  • Synthesizing, summarizing, or generating health records
  • Supporting resource allocation and prioritization

Category: Patient and consumer uses

  • Maintaining or encouraging a healthy lifestyle, such as general wellness apps6

Category: Knowledge translation, knowledge generation, research and development

  • Physician information or physician education
  • Health research or drug development
  • Knowledge management

Category: Public health

  • Public health and public health surveillance (health promotion, disease prevention, surveillance and emergency preparedness, outbreak response, etc.)

With any specific medical application of AI technologies, medico-legal risks can be further modulated by the tool’s potential impact on patient safety, and its level of autonomy and decision making. Generally speaking, more risk should be expected when working with tools that have a greater impact on patient safety, as well as tools that operate with greater autonomy.



Higher patient impact, lower autonomy: MODERATE RISK

  • Automated emergency department triage
  • Automated patient monitoring
  • Sorting test results into those that need clinical review

Higher patient impact, higher autonomy: HIGHER RISK

  • Automated analysis of medical images
  • Automated mental health chatbots
  • Robot-assisted surgery

Lower patient impact, lower autonomy: LOWER RISK

  • Clinical communication and workflow, including patient registration, scheduling visits, voice calling, video calling
  • Autogenerated patient education materials
  • Patient/consumer general wellness apps
  • Knowledge management / medical literature discovery
  • Autogenerated clinical documentation, including scribing

Lower patient impact, higher autonomy: MODERATE RISK

  • Providing recommendations to healthcare professionals
  • AI-enhanced EMRs, including summarization and search

Physicians and stakeholders can take steps to mitigate the potential risks, such as:

  • Reducing the degree of autonomy of AI applications and building human-in-the-loop decision-making into one or more points in the care pathway (a minimal sketch of this approach follows the list);
  • Requiring higher levels of regulatory scrutiny or approval for higher risk systems with greater potential to impact patient care;
  • Requesting necessary information and higher levels of transparency from the developers or vendors of higher-risk AI tools with respect to the tool’s performance, limitations, risks, and required mitigation measures;
  • Considering guidelines or policies for use of specific categories of AI tools within particular care settings.
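
As an illustration of the first mitigation above, the following minimal Python sketch shows one possible form of a human-in-the-loop checkpoint. The confidence threshold, the notion of “high impact,” and all names are assumptions made for illustration, not a recommended standard.

```python
# A minimal sketch of a "human-in-the-loop" checkpoint: an AI suggestion is
# applied automatically only when its confidence is high and its impact on
# patient care is low; otherwise it is routed to a clinician for review.
# The threshold, field names, and examples are hypothetical.

from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.95  # assumed threshold for automatic application

@dataclass
class AISuggestion:
    recommendation: str
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    high_impact: bool   # does it directly affect patient care?

def route(suggestion: AISuggestion) -> str:
    if suggestion.high_impact or suggestion.confidence < CONFIDENCE_FLOOR:
        return "QUEUE FOR CLINICIAN REVIEW"
    return "APPLY AUTOMATICALLY (with audit log)"

print(route(AISuggestion("Flag chest X-ray as abnormal", 0.97, high_impact=True)))
# -> QUEUE FOR CLINICIAN REVIEW
print(route(AISuggestion("Send routine appointment reminder", 0.98, high_impact=False)))
# -> APPLY AUTOMATICALLY (with audit log)
```

In practice, where such checkpoints sit in the care pathway, and what thresholds apply, would depend on the specific use case and applicable regulatory guidance.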

Overall, physicians and other healthcare providers should recognize that, in the current environment, AI tools are intended to aid and augment, not replace, clinical judgment. The mitigation measures required to address the risk of harm will vary depending on the use case.

Potential risks and harms

Awareness of AI's characteristics and the diversity of its applications helps inform the identification and assessment of the measures required for the safe adoption of AI. This knowledge can also guide a better understanding of the required changes to healthcare and the associated medico-legal risks, including in relation to civil liability, privacy, and human rights.

Civil liability

The changing regulatory environment and the diverse range of AI applications are expected to complicate the assignment of liability in the event of patient harm, particularly given the evolving nature of these technologies and inconsistent adoption, guidance, and practices. Cases could be brought on various grounds, including inappropriate reliance by healthcare professionals, flaws built into algorithms, or inadequate selection and maintenance of AI tools.

Claims involving the use of software other than AI may provide some insight into how AI use by physicians could be considered. These cases, coupled with the existing principles of liability that apply to physicians, mean that individual healthcare providers who use AI systems may find themselves defending new types of AI-related claims.

This raises questions about the extent of the obligations on healthcare providers and institutions who rely on AI. For instance, should healthcare professionals critically assess the reliability of the AI technologies they use? The practical challenges underscore the importance of effective regulatory oversight.

Privacy and data protection

In the current privacy landscape, existing rules and standards leave open questions about AI technologies, as legislation attempts to balance innovation, transparency, and responsible AI development.

When applied in the healthcare context, in addition to privacy requirements related to personal information, a number of specific privacy-related issues may arise. These include authorization to use the training datasets required for AI models; collection and use of new data to update or fine-tune AI models; use of patient information when interacting with AI; and requirements for consent, de-identification, or anonymization of data in each of these cases. Privacy legislation is increasingly imposing additional obligations with respect to transparency, in cases where AI makes or recommends a particular decision and individuals request a review of that decision.

Another key element for physicians and healthcare providers to understand is whether the model is trained on the information provided by physicians or patients while it is in operation. For example, an AI scribe may continually update its model based on the physician/patient interactions it records. Without proper consent, this may pose privacy risks to the physician.

Human rights

AI technology can also create exposure to liability risks and uncertainty in other areas of the law. For example, the development and use of AI creates legal risk concerning human rights, particularly as it relates to equality, equity, and non-discrimination. Bias can come from many sources. Datasets used to train AI can have critical gaps, be skewed, or contain false or out-of-date information. The design process of the AI may be biased (e.g. the designers failing to consider the issues affecting a specific group). Bias can also be present in how the AI is deployed, implemented, operated, and maintained.3

There may be unconscious bias and unintentional discrimination in the training data used to develop AI systems, which can then extend historic harms in the form of biased outputs. Evidence of these concerns has been seen in some popular chatbots and LLMs found to perpetuate racist and debunked medical ideas.7,8 There is also a risk that the outputs of an AI system, particularly a generative AI system, could disseminate actively harmful, false, misleading, or poor information.9 If healthcare providers are found to be using AI tools that recommend discriminatory practices, whether unintentionally or not, their use of the AI tool could form the basis of a human rights claim.

It is highly likely that all AI models will have some bias.10 To mitigate bias, it is critical to recognize that it can and does occur, even in the absence of harmful intent or deliberate discrimination. Bias is not new to clinical decision-making; human decision-making is biased as well. It is important for all those designing, developing, and using these tools to know that bias exists, and to take appropriate measures to mitigate the risk of harm.

Intellectual property

There are also uncertainties with respect to AI systems and intellectual property (IP). Generative AI products in particular raise new concerns, given the large datasets required to train them. These datasets may have been scraped from websites or social media and may therefore incorporate copyrighted material, which can surface in an AI system's responses, potentially exposing those involved to claims of copyright infringement if they use or duplicate that material without permission.11

A framework for stakeholder responsibilities

The integration of AI in healthcare presents uncertainties in regulatory oversight and liability. At present, healthcare providers have limited guidance on evaluating or mitigating the risks associated with AI tools. While physicians play a pivotal role in delivering patient care according to their professional duties, coordinated efforts from other stakeholders are crucial to foster trust and facilitate the safe adoption of AI.

Regulatory gaps & fragmentation

Despite significant advancements in recent years, a comprehensive regulatory framework for AI remains a work in progress. This issue is not unique to Canada, as reported recently by the Federation of State Medical Boards: "the regulatory framework for AI...is complex and yet still largely underdeveloped both in the United States and globally."12

The Government of Canada has tabled Bill C-27, Digital Charter Implementation Act, 2022, which would introduce a standalone regulatory framework for AI (the Artificial Intelligence and Data Act or AIDA). AIDA proposes a new regulatory framework for AI systems that would require persons responsible for “high impact AI systems” to assess and mitigate risks of harm and bias. Although many of the details remain to be prescribed by regulators, the requirements will likely have broad application to the healthcare sector, including with respect to the use of personal health information, the need for AI impact assessments, risk mitigation, transparency, training and validation standards, continuous monitoring, risk management, data governance, documentation, human oversight, accuracy, robustness, and cybersecurity.

Provincial governments are also amending their legislation to respond to AI. Quebec has passed An Act to modernize legislative provisions as regards the protection of personal information (Bill 64), which requires organizations to provide notice of AI-assisted decisions to the individual, and, upon request, the personal information used in the decision and the factors that led to the decision. The legislation also affords individuals a right to have the decision of an automated decision system reviewed by a human being.

Health Canada has expanded its authority to license software as a medical device (SaMD)6. The framework takes a risk-based approach, such that software meant to monitor, assess, or diagnose a condition that could result in immediate danger must meet more stringent licensing and monitoring requirements. While adaptive or continuously learning devices may not yet be approved, Health Canada has released draft guidance to regulate these tools. However, software that serves only an administrative purpose is excluded from regulation. Similarly, software will not require approval if it is not intended to analyze a medical image or a signal from an in vitro diagnostic device; is intended to display, analyze, or print medical information; is intended only to support a healthcare professional in prevention, diagnosis, or treatment decisions; and is not intended to replace clinical judgment.

Recognizing the expanding use and availability of AI tools, Colleges and professional associations in Canada have begun to issue preliminary, high-level guidance on the use of AI. While many of these guidelines serve to extend existing professional duties, focusing on accountability, transparency, and consent, some regulators are also suggesting healthcare providers be required to validate AI tools prior to their use.

The manner in which these obligations will be applied remains to be seen, but it is important that any regulatory frameworks for AI establish responsibilities that are in proportion to the level of influence each role has on the risks associated with AI development and use.13

To clarify the roles of key stakeholders and encourage collaborative efforts towards the responsible integration of AI, a proposed framework of responsibilities and accountabilities is outlined below.

1. Developers & vendors

  • Ethical AI development: Ensure AI systems are designed ethically, with fairness, transparency, and inclusiveness in mind.
  • Regulatory compliance: Develop AI systems that comply with existing healthcare regulations and standards, including those relating to safety, privacy, and the mitigation of harm and bias.
  • Validation and testing: Conduct rigorous testing and validation of AI systems to ensure they are safe, reliable, and effective; and test to mitigate bias.
  • Reporting and transparency: Communicate and report openly and clearly to regulators and healthcare providers on the performance and safety of AI systems, including across population subgroups.
  • Continuous monitoring: Monitor AI systems post-deployment to address issues and maintain or improve performance.
  • Privacy and cybersecurity: Implement robust data protection measures to safeguard patient data against breaches and misuse.
  • Clear and necessary documentation and training: Provide comprehensive and accessible documentation of all of the above and training to users to ensure proper use of AI systems.

2. Regulators and legislators of AI in healthcare

  • Licensing and approval: License or establish requirements for AI systems before they enter the market, ensuring they meet safety, efficacy, bias mitigation, privacy, and cybersecurity standards.
  • Post-market surveillance: Establish requirements for ongoing monitoring and enforce these requirements post-market to ensure ongoing compliance.
  • Guidance and standards: Develop and update practical and actionable guidelines and standards for the development, approval, and use of specific categories of AI systems.
  • Reporting: Report to government and the public on the status and performance of AI systems, development and implementation challenges, and recommendations for addressing these issues.
  • Oversight and coordination: Oversee the integration and regulation of AI technologies across the healthcare system, and coordinate with other regulatory bodies to ensure cohesive regulation of AI technologies.
  • Policy development: Develop policies and frameworks for the ethical and responsible use of AI.
  • Public engagement: Engage with the public to address concerns and gather feedback on AI technologies in healthcare, and facilitate public consultations and forums on AI issues, policy, and developments.

3. Healthcare providers

  • Informed decision-making: Use AI systems in a way that enhances clinical decision-making while maintaining professional judgment.
  • Training and education: Engage in required training and education in the use of AI tools.
  • Integration: Integrate AI tools safely and appropriately in clinical workflow.
  • Patient safety: Prioritize patients’ best interests and report patient safety incidents to patients, developers, and regulators as appropriate and required, while respecting patient confidentiality.
  • Ethical use: Use AI systems ethically and transparently, respecting patient autonomy, privacy, and consent.
  • Compliance: Comply with relevant and applicable regulatory requirements and College standards.
  • Record-keeping: Keep accurate records of AI usage and outcomes, sharing data with regulators and developers as appropriate and required.
  • Feedback: Provide relevant feedback to developers on AI system performance and issues encountered.

4. Healthcare managers and leaders

  • Be accountable for responsible and ethical implementation of AI: Develop and publish steps taken to ensure systems acquired and implemented are transparent, fair, reliable, and safe.
  • Strategic and safe implementation: Oversee the integration of AI systems into healthcare operations in a way that realizes AI’s benefits while managing its risks and ensuring safety.
  • Resource allocation: Allocate sufficient resources for the proper deployment, maintenance, monitoring and improvement of AI systems, and for appropriate technical support of deployments.
  • Training: Provide ongoing training on the appropriate use of AI systems to healthcare providers and staff.
  • Risk management: Implement risk management strategies to address potential risks of harm or bias with AI systems over their full life cycle.
  • Safe culture: Promote a culture of continuous learning and improvement in relation to AI technologies.

5. Medical associations/federations and specialty associations

  • Guidelines and best practices: Develop and disseminate guidelines and best practices for the use of AI in specific medical fields.
  • Education and advocacy: Provide and update targeted education and training resources on AI technologies and advocate for their responsible use.
  • Research and collaboration: Support research on the impact and validation of AI tools in healthcare and collaborate with other stakeholders to enhance AI integration.
  • Monitoring: Monitor the adoption and impact of AI within their specialties and provide feedback to regulators and developers.

Call to action

Given the lack of clarity on accountabilities; the complexities inherent in AI systems; the known and unknown risks, benefits, and opportunities; and that action taken now will reduce these risks, while augmenting potential benefits, CMPA encourages stakeholders to:

Reduce fragmentation: Develop and adopt practical and actionable guidelines and standards for the use of specific categories of AI tools. These standards should cover safety, efficacy, privacy, bias, and risk management. They should be applied across jurisdictions, regions and institutions, because fragmentation adds cost, deters developers from implementing solutions, and renders those solutions more diffuse and more difficult to maintain. Maintaining AI systems to multiple standards will be increasingly difficult given AI’s ability to learn and adapt.

Require transparency: AI vendors and developers should provide clear documentation and training on the intended purpose and use; use and protection of personal information; training data and known biases therein; types and specific sets of use cases including assumptions and limitations; testing and validation approaches and results including those to assess biases; performance metrics including pre- and post-deployment; and monitoring and update regimes. Developers and vendors should be partners to deployers and users, employing transparency to increase trust and improve systems.

Integrate stakeholders across the AI lifecycle: Best practices have healthcare providers, patients, management, and others participating at the design, development, validation, deployment, and monitoring stages. These viewpoints contribute to trust, robustness, and transparency, and accelerate critical discussions on the tradeoffs required to develop AI systems (e.g. accuracy vs. explainability; privacy vs. personalization; and defining acceptable levels of bias).

Convene to identify and address key gaps: The solutions could include curriculum development and education to increase AI literacy within healthcare; tools and processes to assess biases; system-wide capabilities to provide trusted third-party assessments; monitoring of AI systems, and more.

Next steps and conclusions

As AI technologies become increasingly integrated into healthcare, physicians face the responsibility of understanding the risks, benefits, and implications for their practice. However, the burden of accountability must continue to be shared with other stakeholders.

It is essential to remember that, although AI technologies are analogous in many respects to their non-digital counterparts, they require specific considerations for their implementation. The functional capabilities and data-driven nature of AI present novel considerations.

While this paper addresses some emerging issues in AI application in healthcare, it only scratches the surface. Topics like medical education, ethics, bias assessment, explainability, and data sovereignty demand further exploration.

As we navigate the early stages of AI integration, regulatory frameworks and oversight mechanisms will likely evolve. However, the potential benefits of AI in improving care and reducing administrative burdens depend on concerted action by stakeholders to improve trust in AI.


References

  • 1. McKinsey & Company. What is AI (artificial intelligence)? McKinsey Explainers [Internet]. 2024 Apr 3 [cited 2024 Apr 10]. Available from: https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-ai.
  • 2. Mello MM, Guha N. Understanding Liability Risk from Using Health Care Artificial Intelligence Tools. N Engl J Med. 2024;390(3):271-8.
  • 3. National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). 2023.
  • 4. Schwartz R, Vassilev A, Greene K, Perine L, Burt A, Hall P. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. National Institute of Standards and Technology; 2022.
  • 5. Tierney AA, Gayre G, Hoberman B, Mattern B, Ballesca M, Kipnis P, et al. Ambient Artificial Intelligence Scribes to Alleviate the Burden of Clinical Documentation. NEJM Catalyst. 2024;5(3).
  • 6. Health Canada. Guidance Document: Software as a Medical Device (SaMD): Definition and Classification. Health Canada; 2019.
  • 7. Omiye JA, Lester JC, Spichak S, Rotemberg V, Daneshjou R. Large language models propagate race-based medicine. NPJ Digital Medicine. 2023;6(1):195.
  • 8. De Rosa N. How the new version of ChatGPT generates hate and disinformation on command. CBC News. 2024.
  • 9. Weidinger L, Uesato J, Rauh M, Griffin C, Huang P-S, Mellor J, et al. Taxonomy of Risks posed by Language Models. 2022 ACM Conference on Fairness, Accountability, and Transparency 2022. p. 214-29.
  • 10. Coalition for Health AI. Blueprint for Trustworthy AI: Implementation guidance and assurance for healthcare. 2023.
  • 11. The New York Times Company v. Microsoft, OpenAI (2023).
  • 12. US Federation of State Medical Boards. Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice. US Federation of State Medical Boards; 2024.
  • 13. Innovation, Science and Economic Development Canada. The Artificial Intelligence and Data Act (Companion document). Government of Canada; 2024.