Digital Health
Vol. 26 No 4 | Summer 2024
Feature
Wearable AI
Dr Helena Qian
BMed. DipLANG. CHIA. AICGG. ARANZCOG (Cert)

Smallpox vaccine. Penicillin. Ether anaesthesia. Insulin. Sonography.

These groundbreaking discoveries revolutionised the healthcare landscape and transformed medical practice and patient care. Once again, our profession is on the cusp of a new frontier, poised to navigate the exciting but unfamiliar territory of artificial intelligence (AI) in healthcare, with its attendant paradigm shifts and novel ethical dilemmas.

AI is defined by the Australian Health Practitioner Regulation Agency (AHPRA) as any software or system ‘able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision making and translation between languages.’ From its inception in the mid-20th century, when early computational algorithms began to assist in medical diagnoses, AI has evolved dramatically. Arne Larsson received the first pacemaker in 1958, but it took decades before implantable cardiac devices were widely adopted. By contrast, in this era of globalisation, the increased popularity of telehealth since the COVID-19 pandemic, rapid advances in computing power, the maturation of data analytics, and the formidable capabilities of large language models built on transformer architectures (e.g. ChatGPT) have seen interest in, and adoption of, AI in healthcare accelerate dramatically.

With growing concerns about an overstretched and unsustainable healthcare system, AI is an attractive solution: McKinsey and Harvard researchers estimate net savings of up to US$360 billion. The Productivity Commission has likewise estimated that up to 30% of healthcare tasks could be automated using AI, freeing clinicians to spend more time on direct patient care. Moreover, improved cost effectiveness and reduced travel time from the uptake of telehealth, digital therapeutics and remote patient monitoring could deliver consumer gains of approximately $895 million annually. Accordingly, the Australian government has invested $2 billion in My Health Record and has pledged almost $30 million for research into how AI can enhance access to health services and drive innovation.

In many instances, however, AI health tools are already being used in real time without adequate validation, peer-reviewed guidelines or regulatory oversight. Amidst this enthusiasm, have we sufficiently explored the potential unintended consequences? Are health practitioners equipped with the resources and healthy scepticism needed to discern which technologies have clinical potential and which carry more risk than benefit? Is our current integration of AI technology in healthcare outpacing regulatory bodies and approaching the feared technological singularity?

Professor Karin Verspoor, Executive Dean of the School of Computing Technologies at RMIT University, co-founder of the Australian Alliance for Artificial Intelligence in Healthcare and Fellow of the Australian Academy of Technological Sciences and Engineering (ATSE), cautions that “health decisions should be augmented but not replaced by AI”. Evidence of efficacy is lacking: of the 84 randomised clinical trials of AI published between 2018 and 2023, none were conducted in Australia; 37% were in the EU, 31% in the US and 29% in China. AI also reflects the inherent biases of its training data, and adoption of biased models can compound existing healthcare inequalities. Given that health decisions are multi-faceted and dependent on each patient’s unique clinical and psychosocial context, it is crucial to recognise that “AI model generalisability is not a given; AI model localisation is essential.” It is imperative to consider the context and ethical implications of integrating AI into healthcare, to harness its benefits while safeguarding public trust and patient rights.

Wearable AI devices in particular have become mainstream, empowering both patients and clinicians while collating large quantities of data for quality assurance. These devices span a variety of smart technologies that monitor physiological signals, including continuous electronic fetal monitoring, automated closed-loop insulin systems and smart sensors for tracking reproductive health metrics, and they have translated into timely interventions and improved patient outcomes.
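To make the “closed loop” concept concrete, here is a minimal sketch in Python of the read-decide-act cycle such systems share: read a sensor value, compare it against a target, compute a capped adjustment, and repeat. Every value and function name below is invented for illustration and bears no relation to any approved device’s validated algorithm.

    # Toy illustration of a closed-loop controller's read-decide-act cycle.
    # All names, targets and gains are hypothetical; real automated insulin
    # delivery systems run validated, regulator-approved algorithms.

    TARGET_GLUCOSE_MMOL = 6.5   # illustrative target only
    GAIN = 0.05                 # invented proportional gain (units per mmol/L of error)
    MAX_ADJUSTMENT = 0.5        # invented safety cap on any single change

    def basal_adjustment(sensor_glucose_mmol: float) -> float:
        """Return a capped, proportional tweak to the basal insulin rate."""
        error = sensor_glucose_mmol - TARGET_GLUCOSE_MMOL
        adjustment = GAIN * error
        # Clamp the adjustment so one noisy sensor reading cannot cause a large swing.
        return max(-MAX_ADJUSTMENT, min(MAX_ADJUSTMENT, adjustment))

    print(basal_adjustment(9.0))   # a reading of 9.0 mmol/L yields +0.125

The clinical point is the structure rather than the numbers: the target, the gain and the safety cap each embody a design decision that must be validated before the device ever touches a patient.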

Evaluating Safety and Efficacy

To help with comprehensively evaluating the safety and clinical efficacy of AI tools, the following steps are recommended:

  1. Clinical Evidence Review: Assess the clinical studies supporting the AI tool’s claims, focusing on trial size, population diversity, conflicts of interest and control measures employed.
  2. Regulatory Approval: Contact the vendor or verify whether the device/tool is registered with the Australian Register of Therapeutic Goods (ARTG).
  3. User Feedback and Technical Efficacy: Routinely gather insights from peers and evaluate the tool’s real-world practical application.
  4. Integration with Clinical Workflow: Assess how seamlessly the tool integrates into existing workflows.
  5. Quality Assurance: Implement comprehensive acceptance testing and ongoing periodic quality control procedures to identify and address issues proactively.
  6. End-User Training: Provide comprehensive training on the tool’s intended use and limitations, including a trial period with local patients to identify biases.
  7. Ongoing Monitoring and Re-Validation: Establish a structured process for ongoing monitoring of AI performance with local test datasets, including routine evaluations of the tool’s accuracy and reliability. Periodic re-validation should be conducted whenever changes in workflow, technology or patient demographics occur (a minimal sketch of such a check follows this list).
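As a concrete, deliberately simple illustration of step 7, the Python sketch below compares an AI tool’s recorded outputs against clinician-adjudicated labels in a locally curated test set and flags when accuracy falls below a pre-agreed threshold. The file name, field names and 90% threshold are hypothetical examples, not a standard.

    # Minimal sketch of routine local re-validation (step 7 above).
    # The file name, column names and threshold are hypothetical examples.
    import csv

    ACCURACY_THRESHOLD = 0.90  # agreed locally before deployment

    def local_accuracy(path: str) -> float:
        """Fraction of cases where the tool's output matched the clinician's label."""
        total = correct = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                total += 1
                correct += row["ai_output"] == row["clinician_label"]
        return correct / total if total else 0.0

    accuracy = local_accuracy("local_test_set.csv")
    if accuracy < ACCURACY_THRESHOLD:
        print(f"Re-validation FAILED at {accuracy:.1%}; escalate for review.")
    else:
        print(f"Re-validation passed at {accuracy:.1%}.")

Rerun after any change in workflow, software version or patient demographics, a check like this gives practices a simple, auditable trigger for the periodic re-validation described above.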

Guiding Principles for AI in Clinical Practice

AHPRA and the National Boards recommend that the following core principles be considered when using AI in clinical practice:

  • Transparency: Health practitioners must be open about how AI technologies function and influence clinical decisions. The depth of information provided should align with the context of AI use; for example, more detail is required when AI impacts personal data directly.
  • Accountability: Clear accountability for AI-generated outcomes is essential, maintaining health practitioners’ central role in decision-making. For instance, if using AI scribing tools, the health practitioner is responsible for reviewing the accuracy and relevance of the generated records, regardless of Therapeutic Goods Administration approval.
  • Patient-Centricity: Any AI application should prioritise patient needs, enhancing overall care.
  • Understanding: Health practitioners should cultivate a strong understanding of the AI tool(s) they use, including intended use, limitations and training methodologies, to ensure safe and relevant application. Research indicates that while experienced health practitioners often trust their expertise over AI systems, they remain susceptible to automation bias, which can culminate in reliance on AI guidance without adequate verification, further underscoring the need for enhanced AI literacy to preserve patient safety.
  • Informed Consent: It is essential for health practitioners to involve patients in decisions regarding AI tools that require personal data input, ensuring informed consent is obtained and documented.
  • Ethical and Legal Issues: Health practitioners must adhere to the professional obligations outlined in their board’s code of conduct, specifically ensuring that data collection, storage, use and disclosure comply with legal requirements and that patient privacy is preserved. Additionally, health practitioners must be aware of whether patient data is also being used to train AI models (currently the focus of a data breach probe into one of Australia’s largest medical imaging providers), understand potential biases in AI algorithms, and apply AI only when appropriate, particularly concerning the health and safety of Aboriginal and Torres Strait Islander peoples and other culturally diverse populations. It is also recommended that practitioners hold adequate professional indemnity insurance covering the use of AI tools in practice.

Effectively harnessing AI in healthcare offers a cost-effective opportunity to synthesise real-time data, enable customised decision-making, and improve patient care efficiency. While the benefits are substantial, risks related to privacy, clinical efficacy, and regulation remain. By proactively addressing these challenges and enhancing digital health literacy for both patients and practitioners, we can significantly improve the quality, accessibility and effectiveness of Australia’s healthcare system whilst maintaining public trust, minimising harm and strengthening our clinical capabilities through cautious adoption of innovative technologies.

