LUGPA Policy Brief: Advancing AI in Healthcare

May 2024 

Artificial Intelligence (AI) is revolutionizing healthcare, enhancing diagnostic accuracy, treatment planning, and overall patient care. In urology diagnostics, AI-powered tools analyze medical imaging data, leading to early detection of urological conditions and significantly improving diagnostic precision. AI algorithms also play a crucial role in treatment planning, enabling healthcare providers to craft personalized treatment strategies.

The introduction of AI in healthcare requires a review of reimbursement policies to ensure fair compensation for AI usage. This involves creating new models that accurately consider AI integration into medical practices. In addition, standardizing training programs for AI use is crucial for maintaining care quality and equipping healthcare professionals with the necessary skills.

Congress is focused on balancing the need to foster innovation with the need to uphold rigorous safety standards. Regulatory changes should ensure the smooth integration of AI systems into existing healthcare infrastructure, with guidelines promoting interoperability and collaboration among AI tools.

AI integration draws on patient data, medical histories, and clinical research to inform tailored patient care strategies, thereby enhancing operational efficiency. However, this integration requires careful attention to data security, patient privacy, and compliance with HIPAA regulations.

Despite the rapid expansion of AI, comprehensive AI regulations in the United States lag behind those in Europe. While Congress actively debates AI regulation in healthcare, focusing on patient safety, data privacy, and equitable AI access, definitive rules are still pending.

The U.S. Food and Drug Administration (FDA) is adapting procedures to evaluate AI algorithm use in medical devices, aiming to strike a balance between innovation and safety.

In Congress, several bills have been introduced to address AI use:

  1. Eliminating Bias in Algorithmic Systems (BIAS) Act: This bill empowers and instructs the Federal Trade Commission ("FTC") to formulate and enforce regulations compelling specific individuals, partnerships, and corporations engaged in the utilization, storage, or sharing of consumers' personal information to perform impact assessments. Furthermore, these entities are mandated to "reasonably address in a timely manner" any detected biases or security issues.
  2. Healthy Technology Act of 2023: This bill allows AI or machine learning technology to prescribe drugs if authorized by state law and approved under federal medical device provisions.
  3. Artificial Intelligence and Biosecurity Risk Assessment Act: This bill requires the Assistant Secretary for Preparedness and Response to conduct risk assessments and implement strategic initiatives or activities to address threats to public health and national security due to technical advancements in artificial intelligence or other emerging technology fields.
  4. Federal Artificial Intelligence Risk Management Act of 2023: This bill mandates that federal agencies adopt the National Institute of Standards and Technology's (NIST) Artificial Intelligence Risk Management Framework. It directs NIST to develop guidelines for agencies to seamlessly integrate the framework into their AI risk management processes, encompassing standards, practices, and tools to address AI development, procurement, and utilization risks. The guidelines outline cybersecurity strategies and tools to bolster AI system security while setting minimum requirements for establishing AI usage profiles within agencies.

In April, U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME) unveiled the first congressional framework designed to deal exclusively with the extreme risks posed by future developments in advanced AI models. The proposed framework focuses on regulating frontier AI models: the most advanced AI systems, including those yet to be developed, which possess immense computing power and are either broadly capable or intended for specific high-risk fields such as bioengineering, chemical engineering, cybersecurity, or nuclear development. The aim is to establish federal oversight to mitigate extreme risks associated with these AI systems.

Oversight would involve evaluating safeguards against biological, chemical, cyber, or nuclear risks, with a tiered licensing structure determining deployment permissions. The oversight entity could be a new agency or an interagency coordinating body, possibly housed within existing departments like the Department of Energy or Department of Commerce. Developers would need to adhere to regulations throughout the development, training, and deployment phases, including incorporating safeguards against identified risks and adhering to cybersecurity standards.

On October 30, 2023, the White House issued an executive order on artificial intelligence. The 117-page order aims to promote the safe, secure, and trustworthy development and use of AI while addressing perceived risks. Directives include promoting domestic AI development, urging Congress to pass federal data privacy protections, protecting individuals' data privacy, accelerating the hiring of AI professionals, and providing AI training for federal employees.

Clear regulatory guidelines are essential for responsible AI development and deployment. Policymakers must balance AI-driven innovation and patient safety, creating adaptable frameworks. AI integration in healthcare offers opportunities and challenges for urology practices, requiring attention to regulations, reimbursement policies, and professional training. Collaboration among stakeholders is key to maximizing AI benefits while maintaining high care standards.

LUGPA recognizes AI's transformative potential and closely monitors its impact on patient care. We will continue to advocate for regulations that encourage responsible AI integration, prioritizing patient safety, ethical considerations, and continuous improvement of healthcare standards.