By: Scott Carroll, Kennedy Farr, and Jennifer Forward
Agentic artificial intelligence (“agentic AI”) refers to systems that can independently plan and execute multi-step tasks without continuous human direction. Today, these systems can analyze charts, labs, imaging, and medication lists, identify concerning trends, and even draft suggested care plans on their own.
This autonomy distinguishes agentic AI from traditional “generative AI,” such as ChatGPT, Microsoft Copilot, or Google Gemini. Generative AI waits for human prompts; it cannot initiate tasks on its own or interact with operational systems to carry them out. Once a conversation with a human ends, generative AI does not retain goals or continue working toward them. Agentic AI, by contrast, maintains objectives over time, continuously monitors new information, and adapts its actions to achieve its programmed goals.
While agentic AI promises potential relief from workforce shortages and could automate routine clinical tasks, it also comes with clinical, security, and ethical risks.
Clinically, errors in an agentic AI system’s diagnoses, treatment recommendations, or orders could lead to patient harm. Agentic AI learns from human-provided data, and biased data can perpetuate health inequities. Further, because such systems by their nature operate autonomously, a single mistake can trigger a chain of incorrect actions that may harm a patient.
Security risks arise because agentic AI requires broad access to sensitive patient information. Weak security controls could expose that data to breaches, and malicious actors could potentially compromise an agentic system and direct it to take harmful actions.
Accountability becomes unclear when an agentic AI system makes a mistake. Responsibility will likely fall on the clinician who used the tool, the facility that deployed it, and the developer who built it. Such a complicated and evolving legal and risk management landscape creates liability concerns. Indeed, MLMIC has issued several publications addressing risk management and the use of AI in clinical settings.
Practitioners should recognize that using agentic AI creates a professional obligation to understand the tools well enough to satisfy the medical standard of care and to uphold the duty to do no harm. Errors made by agentic AI can breach these duties, especially when practitioners lack proper knowledge of how to use the system. Use of experimental or novel tools on patients also raises patient disclosure and consent issues and human subject protection concerns (including potential requirements for Institutional Review Board approval). Finally, there is also the risk that increased reliance on automated systems could erode the human empathy central to patient care, as agentic AI cannot understand or express compassion.
Agentic AI is moving from pilots to clinical use, but guidance remains fragmented. Leading health coalitions, including the Coalition for Health AI and the Trustworthy and Responsible AI Network, have advanced methods to assess safety and performance. A growing body of guidelines for evaluating and validating this new technology is also emerging. For example, on September 10, 2025, the Consumer Technology Association (CTA), North America’s largest technology trade association, released a new standard for validating AI tools that predict health outcomes. This fifth CTA AI standard offers a structured framework for testing predictive algorithms in controlled and real-world settings. It emphasizes transparency about training data, encourages developers to ensure models can explain how they arrive at specific predictions, and calls for robust post-deployment plans to monitor quality and recalibrate when performance drifts.
The sector has not united around one approach, leaving potential users to navigate a patchwork of frameworks. We expect that, over time, a consolidated set of standards and guidelines for development and clinical use will emerge that developers, hospitals, health systems, and clinicians can refer to when implementing AI and machine learning tools, including agentic AI.
Regarding regulation in New York State, there are not yet agentic-AI-specific rules for clinical care, but New York has established guidelines that will shape deployment in health settings. In 2023, the Governor issued an executive order establishing an AI policy and governance framework and directing ethical-use policies for state agencies, followed in 2024–2025 by guidance from the Office of Information Technology Services on responsible use of generative AI.
State medical boards retain their principal role of regulating the practice of medicine and have likewise begun articulating principles for the use of AI in medical practice, emphasizing that agentic systems cannot independently practice medicine, that licensed clinicians remain ultimately responsible for diagnosis and treatment decisions assisted by AI, and that its use should be transparent, documented, and consistent with the standard of care, patient safety, and existing scope-of-practice and supervision requirements. Indeed, over the last year, the New York State Board of Medicine has engaged in discussions with technical, legal, and regulatory representatives regarding this topic, although no formal guidance or advisories have yet been issued by the Board. Additionally, professional specialty boards may develop their own specialty-specific guidance for using agentic AI. Providers should closely monitor guidance from the Board of Medicine and their professional societies.
For hospitals and clinicians, professional standards operate alongside Department of Health requirements for the operation of hospitals and clinics, such as quality assurance, credentialing, and risk management, that apply when agentic AI influences diagnosis or treatment. Additional regulatory issues also arise. An agentic AI system may require FDA oversight, including premarket review and ongoing controls, because a system that provides medical treatment or clinical decision support that influences care could be considered a medical device. Users of AI and machine learning applications, including agentic AI, should understand the level of FDA oversight of their specific application and its current status.
The regulatory and industry guidance discussed in this article does not resolve the legal issues but provides early direction for practitioners and the industry as agentic AI enters clinical practice. Physicians and medical groups considering any AI tool should evaluate it as they would any new medical device or drug, ensuring they understand the technology, its intended use, and its built-in safeguards, and taking additional risk management steps appropriate to the nature of the tool. If you have questions about the topics discussed in this article, please contact Lippes Mathias health law team members Scott V. Carroll (scarroll@lippes.com), Kennedy A. Farr (kfarr@lippes.com), or Jennifer Forward (jforward@lippes.com).