Considerations for adopting and implementing artificial intelligence (AI) in healthcare
How can AI change the way you practice medicine?
Published: October 2024
Reading time: 6 minutes
Introduction
Artificial intelligence (AI) has emerged as a powerful tool with the potential to transform medicine. In simple terms, AI is an algorithm: it takes data, processes it, and produces an output. AI is expected to significantly change how patients receive care and how physicians practice.
Considering these different uses helps you understand the regulatory requirements, impacts, and risks, and whether AI works for you and your practice.
AI encompasses a range of technologies with diverse applications that attempt to mimic human thought processes and learn new information.
Predictive algorithms, designed for specific tasks, may for example:
- provide early warning scores based on patient vitals
- identify potential abnormalities on a CT scan
Generative AI uses input to generate a new “product”. Examples may include:
- recording an interaction to generate a SOAP note
- supporting physician education
Large language models (LLMs), a recent breakthrough in generative AI, excel at language-related tasks. LLMs can handle a variety of inputs beyond text, including audio, video, and data files. LLMs can:
- understand context
- answer questions
- summarize texts
- create content
Good practice guidance
AI systems can be deployed to achieve various objectives in many settings. They may be used to streamline tasks, which might allow physicians to reallocate their time and resources to other work.
Generative AI is being integrated into many other tools, such as EMRs. Understanding the context in which you intend to use AI will help inform the medico-legal risks associated with its use.
AI tools may be used for any of the following reasons:
- clinical (medical) purposes
- administrative (operational) uses
- knowledge translation
- research and development
When procuring an AI tool or service consider the following:
- how do you intend to use the tool
- is the tool useful and practical
- how will it fit into clinical practice
- is there training available
- will it integrate with the technology you currently use
- what are the liability risks
- will it improve patient care
Comprehensive regulatory frameworks aimed at safeguarding patient safety and privacy continue to evolve.
Federal legislation, the Artificial Intelligence and Data Act, has been tabled and, if passed, will introduce a regulatory framework for AI tools. Health Canada has undertaken efforts in recent years to regulate AI by licensing software as a medical device (SaMD). However, some products are not required to be licensed, including those that serve only an administrative purpose.
Some Colleges have issued preliminary guidance urging consideration of accountability, privacy, transparency, and accuracy. Check your College’s policies on the use of AI in your practice.
Physicians are encouraged to verify privacy compliance. This should include consideration of privacy safeguards and whether the vendor will use patient data to train the algorithm. Carefully review the terms of use and privacy policies and ensure that changes cannot be made without client consent and/or advance notification.
AI can exhibit bias due to the data it is trained on, the algorithm used and the human decisions that were made during its development. If the training data used contains biases, such as those based on race, gender, or socioeconomic status, the AI can learn and perpetuate those biases. It is important to determine if the tool is appropriate for your patient population and was trained on representative data.
Regulatory approval can help mitigate risks associated with AI by establishing its safety and effectiveness.
Given some products may learn and change over time, guidelines and measures should be in place for ongoing monitoring.
Colleges expect physicians to obtain informed consent from patients. This includes communicating the risks and benefits of the technology, the potential for bias, and privacy risks.
If the data may be de-identified and used to improve the algorithm by learning from one patient to the next, this should be explained to the patient and documented as part of the consent discussion.
For the foreseeable future, AI will be an aid in clinical decision-making, supporting and complementing other relevant and reliable information and tools. From a risk management perspective, it remains important to apply sound clinical judgment, even when automated decision support is available.
The pace of change presents challenges for regulators, AI developers, and healthcare providers, and contributes to medico-legal risk; a cautious and measured approach to the adoption of AI is therefore required.
If you are unsure about the use of AI in your practice, contact the CMPA to obtain case-specific medico-legal advice.
Checklist: Considerations for adopting and implementing artificial intelligence (AI) in healthcare
Acknowledging the risks and benefits of AI in healthcare is essential
Have you:
- Identified how you intend to use AI in your clinical practice?
- Consulted your College’s policy or considered any regulatory issues for its use?
- Consulted with your hospital or facility administration, if applicable?
- Reviewed the stated purpose and objective of the AI technology?
- Confirmed that the tool is appropriate for your practice?
- Considered whether the tool will impact patient care?
Have you:
- Reviewed the contractual terms and privacy policy of the vendor?
- Verified there are appropriate privacy safeguards, including contractual obligations with the vendor?
- Verified where the data goes?
- Asked whether the data will be retained?
- Verified that the product is compliant with privacy legislation?
- Checked to see if a professional organization or health agency has endorsed the product?
- Reviewed the terms of use?
- Obtained patient consent?
- Reviewed privacy terms with your personal/business counsel?
Have you:
- Checked if the tool is appropriate for your patient population?
- Verified that the vendor provided necessary information, including:
- the product’s intended use
- performance
- limitations
- Verified that the training data is representative of your patient population?
Have you:
- Verified the efficacy and safety of the tool, and considered what level of evidence has been provided by the developer/vendor?
- Checked if other organizations endorse the tool?
- Confirmed the measures in place for oversight, including those for regular updates and maintenance by the developer/vendor?
- Established human verification and validation in your process?
Have you:
- Obtained express consent from the patient (or substitute decision-maker)?
- Documented the consent discussion in the medical record?
Additional Resources
CanMEDS: Communicator, Collaborator, Health Advocate, Professional
DISCLAIMER: This content is for general informational purposes and is not intended to provide specific professional medical or legal advice, nor to constitute a "standard of care" for Canadian healthcare professionals. Your use of CMPA learning resources is subject to the foregoing as well as CMPA's Terms of Use.