AI technology in healthcare has shown tremendous potential to help providers improve patient outcomes and advance efforts to fight and cure disease. Its ability to quickly process vast amounts of data and deliver meaningful insights within seconds, based on advanced algorithms, makes it a powerful tool to assist physicians. However, despite the excitement AI has generated in fields like oncology and population health analysis, policymakers are not willing to let AI operate without governance due to concerns around patient privacy and safety.
The American Medical Association (AMA) is one group that has recently acknowledged the need to draft policy around AI technology in healthcare to address concerns about patient safety. In a release, the AMA detailed its policy recommendations on augmented intelligence. The recommendations outlined the benefits AI has to offer medicine and highlighted the concerns related to the design, use and implementation of AI in healthcare.
The AMA's recommendations for the use of AI technology in healthcare include:
- defining key priority areas where AI would provide the most benefits;
- taking into consideration physicians' perspectives as part of the design and implementation of AI in healthcare;
- ensuring patient safeguards and privacy when it comes to designing and deploying AI;
- promoting an understanding of what AI has to offer, but also educating patients on its limitations; and
- determining any legal implications from healthcare AI relating to its safety, effectiveness and oversight.
The AMA is hardly the first to raise concerns around the use of AI technology in healthcare in recent months. In May, the New England Journal of Medicine published an article highlighting some concerns researchers at Stanford University had about the ethical challenges associated with the use of AI in healthcare.
Today, most AI tools considered by physicians for patient treatment require access to patients' medical records, which are often stored in different systems across the hospital. AI processes the information it accesses and, once specific algorithms are applied, delivers results in the form of suggested treatment plans, diagnosis predictions or other clinical insights. Permission for AI to access patient records is often granted by default because most AI tools are hosted within hospital systems. The greater concern, however, is patient privacy: AI technology in healthcare carries much higher risk factors when it comes to security and data privacy.
Although the AMA's recommendations cover most of what physicians and patients would want to see in a policy around healthcare-related AI technology, security and privacy will likely be the two most pressing areas AI vendors have to address. Some of the factors contributing to the privacy concerns relate directly to what supplies AI with its data and power. These factors include:
- moving patient data outside of the hospital firewalls in order to process it in the AI vendor's own data centers;
- increasing data breach risks as the result of centralizing vast amounts of health data;
- increasing the attack surface by introducing more systems that touch patient data; and
- mixing nonclinical data, such as social media and other patient-generated information, in order to link lifestyle and behaviors to illness and wellness.
Technological innovations in AI are proving to be the next big thing in healthcare. Although adoption of AI technology in healthcare is still limited to a small number of health systems, such as the Mayo Clinic, early results from companies like Google, Apple and IBM have spurred further investment in this arena. Regardless of the size of the tech companies investing in AI for healthcare, developing appropriate policies and standards will help ensure accountability, safety and security in its use for patient care, data protection and privacy.