This article is adapted from “A Primer on Law, Risk and AI in Health Care,” published in Healthcare and Life Sciences Law Committee Update (Vol. 3 no. 1, Sept. 2018), and is reproduced by permission of the International Bar Association, London, UK. © International Bar Association.
An era of unprecedented technological innovation is transforming the healthcare sector. Perhaps to a greater extent than electronic medical records or virtual care, artificial intelligence (AI) is expected to significantly change how care is delivered and how patients are treated. AI has already been shown to accurately diagnose certain cancers, predict the progression of chronic diseases, and help develop treatment plans.1 As the potential of AI evolves, it is important to consider the medico-legal risks associated with its use, especially while the regulatory framework for AI is still being developed.1
What is artificial intelligence?
AI can be broadly defined as the capacity of a machine or computer to mimic intelligent human thought processes and learn new information.2 “Machine learning” allows computers to learn from data and improve with experience without being explicitly programmed to do so. Common applications of machine learning include image and speech recognition. These technologies are adaptive and can continue to learn and evolve as they receive new information, including in response to use in real-world settings.2
Opportunities and challenges
The rapid evolution of AI technologies is expected to improve healthcare and change the way it is delivered.3 For example, AI is being explored, along with other tools, as a means of increasing diagnostic accuracy, improving treatment planning, and forecasting outcomes of care.4 AI has shown particular promise for clinical application in image-intensive fields, including radiology, pathology, ophthalmology, dermatology, and image-guided surgery,3 as well as for broader public health purposes, such as disease surveillance.2 A number of AI applications have already been approved by Health Canada and international regulators.5
Despite the attention AI is receiving, high-quality evidence about its effectiveness and reliability remains limited.6 Another challenge is AI’s inability to explain its reasoning processes, otherwise known as the “black box” effect.7 The datasets some AI technologies use to “learn” can also introduce bias. For example, a dataset that unintentionally excludes patients with certain backgrounds, conditions, or characteristics may not be reliable for those segments of the population.7 Concerns can also arise about integrating AI into medicine without fostering over-reliance (where healthcare providers routinely defer to AI outputs) or under-reliance (where relevant information from AI tools is dismissed).2
The rapid pace at which AI technologies develop can also create medico-legal risk as their functional capacity and possible uses change, particularly in the healthcare space. A robust regulatory framework for patient safety and the quality of AI technologies remains a work in progress, which contributes to uncertainty in assessing liability risks.1
A measured approach to AI
Regulatory guidance for practitioners on the use of AI technologies in healthcare is still scarce.1 In this environment, physicians considering the adoption of AI in their practice should continue to have regard to the best interests of the patient.
Some medical regulatory authorities (Colleges) and professional associations and federations have issued interim guidelines. For example, the College of Physicians and Surgeons of British Columbia has suggested physicians apply a grading system to assess the quality of applications (or apps) that incorporate AI.8 The College suggests using the App Evaluation Model developed by the American Psychiatric Association,9 a five-step assessment covering the app’s business model (including advertising and potential conflicts of interest), privacy and security, the evidence base that informs the algorithm, ease of use, and interoperability.
The Canadian Medical Association’s Guiding Principles for Physicians Recommending Mobile Health Applications to Patients10 may also be a helpful resource. The goal of using an AI-based technology should be to enhance patient care and complement the physician-patient relationship. Physicians using AI need to be mindful of their legal and medical professional obligations, and should discuss with the patient the appropriateness of using the AI technology and any privacy risks. The CMA also suggests considering whether there is evidence of an app’s safety and effectiveness, and whether it is endorsed by a professional organization, is easy to use, and demonstrates a high standard of security.
Before relying upon an AI technology, it is helpful to know whether it has received an endorsement from a reputable professional or regulatory organization. You should review the technology and seek advice from colleagues, professional associations and federations, regulatory authorities, and/or administrators at your facility, as appropriate, regarding its suitability for clinical practice. It will be important to consider its terms of use, reliability, and privacy, as well as the following questions:
- Are there appropriate safeguards for the collection, transfer, and use of patient data?
- Are appropriate procedures followed and is consent obtained if the AI developer plans to store or use patient data to adapt the AI tool?
- Are measures in place to maintain high standards for patient safety and reliability throughout the AI tool’s lifecycle, including if the tool is adaptive and will change in response to new patient data?
- What is the stated purpose and objective of the AI technology and is its use appropriate in the circumstances of your practice? This includes a clear understanding of the intended patient populations and the clinical use conditions.
This review will be especially important given the increased medico-legal risks associated with AI tools that have not been designed specifically for healthcare delivery.
Regulatory approval can also help mitigate risks associated with the use of AI by helping to establish the safety, effectiveness, and quality of an AI technology. Health Canada has undertaken efforts in recent years to regulate AI by licensing software as a medical device (SaMD).11 Health Canada has adopted a risk-based approach: software intended to monitor, assess, or diagnose a condition in which an inaccurate result could pose an immediate danger must meet more stringent licensing and monitoring requirements. Health Canada has also announced plans to regulate “adaptive” SaMD that continues to “learn” after being licensed, using more flexible regulatory requirements tailored to the specific product, including post-market surveillance.12
Complementing clinical judgment
When using an AI-based technology in your medical practice, it is important to evaluate any findings, recommendations, or diagnoses suggested by the tool. Most AI applications are designed to be clinical aids, used by clinicians as appropriate to complement other relevant and reliable clinical information and tools. Medical care provided to the patient should continue to reflect your own recommendations based on objective evidence and sound medical judgment.
The bottom line
- AI technologies are currently intended to complement clinical care. Medical care provided to the patient should reflect your own recommendations based on objective evidence and sound medical judgment.
- Critically review and assess whether the AI tool is suited for its intended use and the nature of your practice.
- Consider the measures in place to ensure the AI tool’s continued effectiveness and reliability.
- Continue to be mindful of your legal and medical professional obligations, including privacy and confidentiality. Consider how patient data will be transferred, stored, and used, and whether reasonable safeguards are in place to protect it. The applicable policies or guidelines of the appropriate College or health institution should also be considered.
- Be aware of bias and seek to mitigate it when possible by pursuing alternate sources of information and consulting colleagues.
References
- 1. Crolla D, Lapner M. A primer on law, risk and AI in health care. Healthcare and Life Sciences Law Committee Update. 2018 Sept;3(1).
- 2. Canada’s Drug and Health Technology Agency. An Overview of Clinical Applications of Artificial Intelligence. 2022 Nov.
- 3. Naylor D. On the Prospects for a (Deep) Learning Health Care System. JAMA. 2018;320(11):1099–1100. See also: Government of Canada, Standing Senate Committee on Social Affairs, Science and Technology. Challenge Ahead: Integrating Robotics, Artificial Intelligence and 3D Printing Technologies into Canada’s Healthcare Systems. 2017 Oct.
- 4. Macrae C. Governing the safety of artificial intelligence in healthcare. BMJ Qual Saf. 2019;28(6):495–498.
- 5. Health Canada. Medical Devices Active Licence Listing (MDALL). See also, for example, U.S. Food & Drug Administration. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices.
- 6. Antoniou T, Mamdani M. Evaluation of machine learning solutions in medicine. CMAJ. 2021;193(36):E1425–E1429.
- 7. Challen R, Denny J, Pitt M, et al. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28:231–237.
- 8. College of Physicians and Surgeons of British Columbia. Prescribing apps – the challenge of choice. College Connector. 2018 Nov/Dec;6(6).
- 9. American Psychiatric Association. App Evaluation Model. [Accessed 2019 May].
- 10. Canadian Medical Association. Guiding Principles for Physicians Recommending Mobile Health Applications to Patients. CMA Policy, 2015.
- 11. Health Canada. Guidance Document: Software as a Medical Device (SaMD): Definition and Classification.
- 12. Health Canada. Regulating Advanced Therapeutic Products. 2022 Dec.