Beyond Ambient Scribes: Microsoft's Agentic AI Push and What It Means for Health Care IT
If you work in health care IT, you already know the ambient AI documentation story. A clinician walks into a patient encounter, the AI listens, and a draft note appears in the EHR. It is a real workflow improvement, and the technology has matured faster than most of us expected. But at HIMSS 2026 in March, Microsoft made it clear that ambient documentation is just the opening act. The company's Dragon Copilot platform is evolving from a tool that writes notes into something that reasons, acts, and makes recommendations inside clinical workflows, and that shift has significant implications for every health care IT team responsible for supporting it.
Kenneth Harper, General Manager of Dragon and DAX Copilot Product Management at Microsoft, put it plainly in a HIMSS TV segment: agentic AI will soon go beyond preparing documentation to surface important data from captured patient-doctor conversations that can enhance clinical decision-making. In a separate interview with Healthcare IT News, Harper described a vision where Dragon Copilot can serve up intelligence, guidance, and evidence-based recommendations on what the provider should consider doing on behalf of the patient.
That is a fundamentally different product category than an AI scribe. And it raises a fundamentally different set of questions for the people who have to deploy, secure, integrate, and support it.
Where Dragon Copilot Stands Today
To understand where this is headed, it helps to understand where it is right now. Microsoft launched Dragon Copilot in March 2025 as a unification of Dragon Medical One (the long-established speech recognition platform used by more than 600,000 clinicians globally) and DAX Copilot (formerly Dragon Ambient eXperience, the ambient documentation tool), with a set of generative AI capabilities layered on top. If you have been in health care IT long enough to remember Dragon NaturallySpeaking or the early Dragon Medical dictation products, this is the latest evolution of that lineage, but the leap from dictation to ambient listening to agentic reasoning is a different kind of product altogether.
Adoption has been substantial. According to Microsoft's HIMSS 2026 blog post, more than 100,000 clinicians now use Dragon Copilot daily, supporting care for millions of patients every month. More than 600 health systems have adopted DAX or Dragon tools in the past 18 months. Those are Microsoft's numbers, and they should be understood as vendor-reported figures, but even discounted they represent a significant installed base.
At HIMSS 2026, Microsoft announced several expansions that move the product further toward agentic territory. Dragon Copilot now integrates with Microsoft 365 Copilot through Work IQ, allowing clinicians to pull in work-related information from connected apps and enterprise data without leaving the clinical workflow. A new partner AI app ecosystem through Microsoft Marketplace brings third-party agents into the Dragon Copilot experience, including partners like Regard (real-time comorbidity detection, deployed in 150+ hospitals), Atropos Health (evidence-based clinical decision support), and others focused on revenue cycle and prior authorization. Role-based experiences now extend to nurses and radiologists in addition to physicians. And the platform captures conversations in 58 languages with automatic conversion to the primary language used in each country.
Harper described the trajectory in practical terms: "Wherever your cursor is, Dragon Copilot is there, ready to do something on your behalf."
The Research Behind the Push
Microsoft did not make this pivot based on intuition alone. In collaboration with The Health Management Academy (THMA), Microsoft co-published a research study titled "At the Frontier: Gauging Health Care's Readiness for Agentic AI Innovation," which appeared in the January 2026 issue of NEJM AI (published online December 24, 2025).
The study surveyed 30 senior U.S. health system leaders across C-suite, clinical, IT, and analytics roles. It introduced an agentic AI maturity curve and a five-dimension readiness framework covering strategy, infrastructure, governance, workforce readiness, and adoption.
The headline numbers tell an interesting story. Among the leaders surveyed, 43% said they are piloting or testing agentic AI. Only 3% have deployed agents in live production workflows. One-third have no plans to explore agentic AI in the next one to two years. At the same time, optimism runs high: 77% expect improved backend productivity, 60% anticipate meaningful improvements to the provider-patient experience, and 57% foresee overall productivity gains.
When asked about investment priorities, 40% focused on near-term financial ROI and 33% on workforce efficiency. The top domains for agentic AI deployment were clinical decision support (47%), revenue cycle management (47%), clinical documentation and patient engagement (43%), and patient flow optimization (40%).
A couple of important caveats for context. This study was published as sponsored content in NEJM AI, not as peer-reviewed research. It is transparent about being a collaboration between Microsoft and THMA. And the sample size of 30 respondents means that "3% deployed" represents roughly one organization. The directional findings are useful, but these numbers should not be treated as representative of the entire health care industry.
What the research does highlight effectively is the gap between enthusiasm and execution, and the barriers that sit squarely in IT's lane.
What IT Teams Need to Be Thinking About Right Now
Here is where this gets real for health care IT practitioners. Agentic AI is not a clinician-only concern. When an AI system moves from generating a text note to autonomously surfacing clinical recommendations, querying enterprise data, and interacting with third-party services inside an EHR workflow, the infrastructure, security, integration, and compliance implications multiply.
Data Infrastructure
The THMA research found that fragmented data infrastructure is a major barrier to agentic AI readiness. That finding will not surprise anyone who has spent time trying to get clean, unified data out of a health care environment. Agentic AI systems need access to data across multiple enterprise systems simultaneously. If your organization's data exists in silos with inconsistent formats, incomplete FHIR implementations, or manual interface engines held together with duct tape and good intentions, agentic AI will struggle before it starts.
For organizations running Epic, the integration path for Dragon Copilot is relatively well-established through the longstanding DAX-Epic partnership. Oracle Health (formerly Cerner) environments and other EHR platforms may face more integration complexity. Regardless of your EHR, the foundational question is the same: can your clinical and operational data be accessed, correlated, and presented in real time? For many health care organizations, the honest answer is "not yet."
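If FHIR maturity is the open question, one concrete way to start answering it is to check whether your FHIR server actually advertises the resource types an agentic integration would need. The sketch below, in plain Python with only the standard library, parses a server's CapabilityStatement (the JSON returned by `GET [base]/metadata` in the FHIR specification). The base URL, the required-resource list, and the function names are illustrative assumptions, not part of any vendor's documented interface:

```python
import json
from urllib.request import urlopen, Request

# Resource types an ambient/agentic AI integration commonly needs.
# Adjust this list to match your vendor's actual interface specification.
REQUIRED_RESOURCES = {"Patient", "Encounter", "Observation", "DocumentReference"}

def missing_resources(capability_statement: dict, required=REQUIRED_RESOURCES) -> set:
    """Return the required FHIR resource types the server does not advertise.

    Walks the CapabilityStatement's rest.resource entries and compares the
    supported resource types against `required`.
    """
    supported = {
        resource.get("type")
        for rest in capability_statement.get("rest", [])
        for resource in rest.get("resource", [])
    }
    return set(required) - supported

if __name__ == "__main__":
    # Hypothetical endpoint; substitute your EHR's FHIR base URL.
    base = "https://fhir.example-health.org/r4"
    req = Request(f"{base}/metadata", headers={"Accept": "application/fhir+json"})
    with urlopen(req) as resp:
        caps = json.load(resp)
    gaps = missing_resources(caps)
    print("Missing resource support:", gaps or "none")
```

A check like this will not tell you whether the data behind those endpoints is clean or complete, but it surfaces hard gaps early, before an evaluation gets deep enough to be expensive.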
Network and Compute Considerations
Ambient AI documentation already added traffic to health care networks. Agentic AI systems that query multiple data sources, call external APIs, and process responses in real time will add more. The bandwidth requirements are not enormous on a per-session basis, but the aggregate load matters, especially at organizations where network infrastructure was sized for a different era.
If your environment is still running a flat network or has not segmented clinical IoT traffic from administrative and AI workloads, now is a good time to revisit that architecture. AI agent traffic should be identifiable and manageable within your network monitoring tools. If you are running LibreNMS, PRTG, or similar platforms, make sure you can see this traffic and set appropriate baselines before deployment, not after.
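Setting a baseline before deployment can be as simple as sampling per-session throughput during a pilot window and flagging later sessions that deviate sharply. A minimal, tool-agnostic sketch (the numbers, threshold, and function names are illustrative assumptions, not LibreNMS or PRTG APIs):

```python
from statistics import mean, stdev

def baseline(samples: list[float]) -> tuple[float, float]:
    """Mean and sample standard deviation of observed per-session throughput (kbps)."""
    return mean(samples), stdev(samples)

def is_anomalous(observed_kbps: float, mu: float, sigma: float, z: float = 3.0) -> bool:
    """Flag sessions more than z standard deviations above the pilot baseline."""
    return observed_kbps > mu + z * sigma

# Hypothetical throughput samples gathered during a pre-deployment pilot window.
pilot_samples = [110.0, 95.0, 102.0, 98.0, 105.0, 99.0, 101.0]
mu, sigma = baseline(pilot_samples)
print(f"baseline {mu:.1f} kbps, flag sessions above {mu + 3 * sigma:.1f} kbps")
```

In practice you would feed this from whatever your monitoring platform exports, but the point stands regardless of tooling: you cannot call a post-deployment traffic pattern anomalous if you never measured what normal looked like.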
Identity and Access
This is a subtle but important point. Agentic AI systems that act on behalf of clinicians need to be treated as a distinct identity class within your identity and access management framework. These are not traditional service accounts. They are non-deterministic agents that may make different API calls in different contexts depending on the conversation they are processing.
If your organization is running Entra ID (formerly Azure AD) with Conditional Access policies, you need to think about how AI agent sessions are authenticated, what resources they can access, what actions they can take, and how those sessions are logged and auditable. The traditional approach of issuing a service account with static permissions does not map well to an agent that reasons about what data to retrieve based on a live clinical conversation.
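One way to keep agent sessions auditable in an Entra ID environment is to pull sign-in logs from Microsoft Graph and summarize them per agent identity. The sketch below assumes an access token carrying the AuditLog.Read.All permission; the app ID is a placeholder, and the helper and summary logic are our illustration, not a Microsoft-provided pattern:

```python
import json
from urllib.request import urlopen, Request
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def summarize_signins(records: list[dict]) -> dict:
    """Summarize Graph sign-in log entries for one agent identity:
    total sessions, failed sign-ins, and distinct source IPs."""
    return {
        "total": len(records),
        "failures": sum(1 for r in records if r.get("status", {}).get("errorCode", 0) != 0),
        "source_ips": sorted({r.get("ipAddress") for r in records if r.get("ipAddress")}),
    }

def fetch_agent_signins(token: str, app_id: str) -> list[dict]:
    """Pull recent sign-ins for a given application (agent) ID from the
    Graph auditLogs/signIns endpoint."""
    flt = quote(f"appId eq '{app_id}'")
    req = Request(
        f"{GRAPH}/auditLogs/signIns?$filter={flt}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp).get("value", [])

if __name__ == "__main__":
    # Both values are placeholders for illustration.
    records = fetch_agent_signins(token="<access-token>", app_id="<agent-app-id>")
    print(summarize_signins(records))
```

Even a crude summary like this answers questions that matter in an incident review: how often is this agent authenticating, from where, and how many of those attempts fail.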
The BAA Question: Start Here Before You Start Anywhere
Before your organization invests a single hour evaluating any AI tool that will interact with patient data, there is one question that should come first: does the vendor offer a Business Associate Agreement (BAA)?
This is not optional, and the timing matters more than most people realize. Under 45 CFR 164.502(e), a covered entity may disclose protected health information to a business associate only if it obtains satisfactory assurance that the business associate will appropriately safeguard the information. Under 45 CFR 164.504(e)(2), that assurance must be documented through a written contract (the BAA) that establishes permitted uses and disclosures, requires appropriate safeguards, mandates breach reporting, and flows down requirements to subcontractors.
The BAA must be executed before any protected health information touches the system. Not after go-live. Not during the pilot. Before. This includes trials, demos in production environments, and proof-of-concept deployments. If your organization runs a trial of an ambient AI tool and a patient conversation flows through it without a BAA in place, that is an impermissible disclosure of PHI under 45 CFR 164.502(e). Under the breach notification provisions at 45 CFR 164.402, an impermissible disclosure is presumed to be a breach unless the organization can demonstrate a low probability that the information was compromised. In practical terms, if PHI funnels through an AI tool with no BAA in place, whether intentionally or by accident, you are looking at a potential reportable breach. It does not matter that it was "just a trial" or that the vendor says their platform is secure. The contractual framework has to be in place first.
An ambient AI tool that listens to patient-clinician conversations is, by definition, creating, receiving, and processing protected health information. An agentic AI system that queries EHR data, surfaces clinical recommendations, and interacts with third-party services is doing even more of it. If the vendor will not sign a BAA, the conversation is over. It does not matter how impressive the demo was or how much the clinicians want it. Without a BAA, you cannot use it with PHI, period.
Microsoft does offer BAAs for Dragon Copilot and its Azure and M365 health care services. But as this market grows and new vendors enter the space, do not assume every product that markets itself as "HIPAA compliant" actually has the contractual framework to back that up. Ask for the BAA before you invest a single hour in evaluation. Read it carefully. Pay attention to what it covers, what it excludes, and how subcontractor relationships are handled under 45 CFR 164.504(e)(5), which requires that business associate contracts with subcontractors carry the same restrictions and conditions that apply to the business associate itself. An agentic AI platform that routes data through third-party partner agents (as Dragon Copilot's new Marketplace integrations do) makes that subcontractor chain particularly important to understand. This is not a checkbox exercise; it is the legal foundation for whether you can use the product at all.
Broader HIPAA and Regulatory Considerations
The BAA gets you in the door, but it does not cover every compliance concern that agentic AI introduces. Existing HIPAA frameworks were not designed with autonomous AI agents in mind, and the regulatory landscape is actively evolving to catch up.
The proposed HIPAA Security Rule update (published January 6, 2025, with the comment period closing March 7, 2025 after nearly 5,000 comments) remains in proposed form as of this writing, with finalization on OCR's regulatory agenda for May 2026. If finalized, the rule would require that AI software handling ePHI be listed in technology asset inventories, which is a reasonable expectation that most organizations are not currently meeting for ambient AI tools.

The proposed rule would also eliminate the "Addressable" safeguard designation, making all implementation specifications Required. For readers less familiar with that distinction: under the current HIPAA Security Rule, "Addressable" does not mean optional. It means the organization must implement the specification or document why an equivalent alternative is appropriate. The proposed change would remove that flexibility entirely and mandate encryption, MFA, and annual compliance audits across the board. This is directly relevant to agentic AI deployments because it would raise the floor on what every health care organization must implement, regardless of size or resources.
On the FDA side, agentic AI systems that make autonomous clinical recommendations face a regulatory gray area. The FDA's updated Clinical Decision Support guidance (January 2026) clarifies that most agentic AI systems making autonomous clinical decisions would not qualify for the CDS exemption under the 21st Century Cures Act because they fail Criterion 4, which requires independent review by a health care professional of the basis for the AI's recommendations. This does not mean agentic clinical AI is prohibited, but it means the regulatory classification of these tools is still being sorted out, and health care organizations should understand what their vendors' regulatory posture actually is.
The Competitive Landscape Is Moving Fast
Microsoft is not the only player making agentic moves in health care, and IT teams evaluating these tools should understand the broader market.
Epic announced Agent Factory at HIMSS 2026, a platform for creating and monitoring AI agents that reason and act across clinical and operational workflows. Epic's AI architecture now includes Art (clinician-facing ambient documentation and clinical queries), Penny (revenue cycle automation), and Emmie (patient-facing MyChart interactions), with more than 85% of Epic's customers reportedly using some form of Epic AI. For organizations already running Epic, the build-versus-buy calculus is getting more complicated. Epic maintains a strategic partnership with Microsoft for Dragon integration while simultaneously building its own native AI scribe capabilities, which creates competitive tension that IT teams should be tracking.
Amazon launched Connect Health in March 2026, a suite of agentic AI tools for providers covering everything from patient identity verification to ambient documentation and medical coding. Google Cloud showcased Gemini-powered AI agent partnerships with CVS Health, Humana, Highmark, and Waystar at HIMSS 2026.
On the independent side, Abridge has emerged as the fastest-growing competitor in the ambient AI scribe space, earning Best in KLAS 2025 for Ambient AI and reaching a $5.3 billion valuation. Abridge's pricing (approximately $250/month per provider) undercuts Microsoft's pricing (approximately $400-600/month), which matters for budget-constrained organizations. Microsoft is responding with a simplified consumption pricing model launching May 1, 2026, and a Rural Health Resiliency Program offering 60% discounts for eligible rural hospitals (program details are subject to change, with eligibility criteria set by Microsoft).
The point is not to pick a winner here. The point is that this market is moving fast, the products are converging on similar agentic capabilities, and IT teams need to evaluate these tools based on integration requirements, security posture, BAA coverage, data handling practices, and actual fit for their environment, not just clinical feature demos.
What the Outcomes Data Actually Shows
It is worth being honest about what we know and do not know about the real-world impact of ambient AI in health care, because the agentic AI pitch is built on top of these results.
The vendor-reported numbers are impressive. Microsoft cites Northwestern Medicine data showing physicians who used DAX Copilot in at least 50% of encounters saw an additional 11.3 patient visits per month and a 24% decrease in time spent on notes. It is worth noting that the 11.3 figure comes from a high-utilization subset. Northwestern's formal outcomes study, using controlled analysis with a broader user group, found 5 additional appointments per provider per month and 112% ROI, which is a solid result but a meaningfully different number. Mercy Health System reported 108,000 hours saved across their system and a 13% reduction in clinician burnout. Multiple organizations have reported reductions in "pajama time," the after-hours documentation work that contributes to clinician burnout.
The independent peer-reviewed evidence tells a more nuanced story. A randomized controlled trial at UCLA published in NEJM AI in late 2025 found that DAX Copilot produced only a 1.7% decrease in time-in-note, a result that was not statistically significant. Burnout improvements were significant, but the efficiency gains that underpin the business case were modest in a controlled setting. A longitudinal study of Atrium Health's DAX deployment, also published in NEJM AI, found that high DAX users saw only about a 7% decrease in documentation hours.
This does not mean ambient AI is not valuable. Clinician satisfaction and burnout reduction are real and important outcomes. But IT leaders and administrators building budget justifications for agentic AI should ground their ROI models in the peer-reviewed data, not just vendor case studies.
Practical Steps for Health Care IT Teams
If your organization is exploring agentic AI, or if clinical leadership is pushing for it (and they probably are), here is a practical starting framework.
Confirm BAA coverage first. Before any evaluation begins, before any trial, before any PHI enters the picture, confirm the vendor will execute a BAA and review its terms. This is non-negotiable under HIPAA, and it should be the very first filter. If the vendor cannot produce a BAA, stop the evaluation and move on.
Assess your data infrastructure honestly. Agentic AI needs access to clean, integrated data. If your EHR integration is incomplete, your FHIR endpoints are not mature, or your clinical data lives in disconnected systems, those are the problems to solve before layering on AI agents.
Evaluate your network readiness. Can your network handle the additional traffic? Can you identify and monitor AI agent sessions? Is your segmentation sufficient to keep AI workloads appropriately isolated?
Get your identity house in order. AI agents acting on behalf of clinicians need appropriate identity controls, session logging, and access boundaries. Work with your identity team to define how agent identities will be managed in Entra ID (formerly Azure AD) or your identity platform of choice.
Understand the regulatory landscape. AI vendors will tell you their product is "HIPAA compliant." Your job is to verify that claim, understand the FDA classification (or lack thereof) for tools that make clinical recommendations, and ensure your organization's risk analysis accounts for AI-specific risks.
Evaluate based on your environment, not the demo. A product that integrates beautifully with Epic at a large health system may be a different proposition for a small community hospital running a different EHR with a two-person IT team. Ask vendors hard questions about deployment requirements, minimum infrastructure, and what support looks like for smaller organizations.
The Bottom Line
Agentic AI in health care is not a distant future scenario. Microsoft, Epic, Amazon, and Google are all shipping products today or in the near term that move beyond documentation into autonomous reasoning and action. The THMA research confirms what most health care IT leaders probably already sense: there is widespread interest, very little production deployment, and a lot of foundational work that needs to happen first.
The good news is that most of that foundational work (building clean data infrastructure, segmenting networks properly, implementing sound identity management, and ensuring compliance frameworks cover AI-specific risks) is work that makes your organization better regardless of whether agentic AI arrives next quarter or next year.
The practical reality is that for most health care organizations, the immediate next step is not deploying agentic AI. It is making sure the foundation is solid for when it arrives. And for the one-third of leaders who say they have no plans to explore this in the next two years: the technology is not going to wait.
Sources
- Harper, K. "Agentic AI can turn clinical notes into insights." HIMSS TV, March 2026. https://himsstv.brightcovegallery.com/
- "Microsoft's AI tool unification in Dragon Copilot takes center stage at HIMSS26." Healthcare IT News, March 5, 2026. https://www.healthcareitnews.com/news/microsofts-ai-tool-unification-dragon-copilot-takes-center-stage-himss26
- "Unify. Simplify. Scale: Microsoft Dragon Copilot meets the moment at HIMSS 2026." Microsoft Industry Blog, March 5, 2026. https://www.microsoft.com/en-us/industry/blog/healthcare/2026/03/05/unify-simplify-scale-microsoft-dragon-copilot-meets-the-moment-at-himss-2026/
- "Microsoft Dragon Copilot provides the healthcare industry's first unified voice AI assistant." Microsoft News, March 3, 2025. https://news.microsoft.com/source/2025/03/03/microsoft-dragon-copilot-provides-the-healthcare-industrys-first-unified-voice-ai-assistant-that-enables-clinicians-to-streamline-clinical-documentation-surface-information-and-automate-task/
- "At the Frontier: Gauging Health Care's Readiness for Agentic AI Innovation." NEJM AI (Sponsored Content), January 2026. https://ai.nejm.org/doi/full/10.1056/AI-S2501336
- "A year of DAX Copilot: Healthcare innovation that refocuses on the clinician-patient connection." Microsoft Blog, September 26, 2024. https://blogs.microsoft.com/blog/2024/09/26/a-year-of-dax-copilot-healthcare-innovation-that-refocuses-on-the-clinician-patient-connection/
- "Regard to Showcase Partnership with Microsoft Dragon Copilot at HIMSS 2026." PR Newswire, March 5, 2026. https://www.prnewswire.com/news-releases/regard-to-showcase-partnership-with-microsoft-dragon-copilot-at-himss-2026-302705774.html
- "Ambient AI Scribes in Clinical Practice: A Randomized Trial." NEJM AI, 2025. https://ai.nejm.org/doi/abs/10.1056/AIoa2501000
- Northwestern Medicine DAX Copilot Outcomes Study. Microsoft, 2024. https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-product-and-services/azure/documents/microsoft-northwestern-medicine-outcomes-study-final-1037321.pdf
- "Happy Docs Make Happy Patients." Mercy Health, July 23, 2025. https://www.mercy.net/newsroom/2025-07-23/happy-docs-make-happy-patients/
- "HIMSS26: Epic expands AI roadmap, previews Factory to build and orchestrate AI agents." Fierce Healthcare, March 2026. https://www.fiercehealthcare.com/ai-and-machine-learning/himss26-epic-expands-ai-roadmap-previews-factory-build-and-orchestrate-ai
- "Introducing Amazon Connect Health: Agentic AI for healthcare." AWS Blog, March 2026. https://aws.amazon.com/blogs/industries/introducing-amazon-connect-health-agentic-ai-for-healthcare-built-for-the-people-who-deliver-it/
- "Google Cloud to showcase how Gemini-powered AI agents are transforming healthcare at HIMSS26." Healthcare Finance News, March 2026. https://www.healthcarefinancenews.com/news/google-cloud-showcase-how-gemini-powered-ai-agents-are-transforming-healthcare-himss26
- 45 CFR 164.502(e) - Disclosures to business associates. https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.502
- 45 CFR 164.504(e) - Business associate contracts. https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-E/section-164.504
- 45 CFR 164.402 - Breach definition and presumption. https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-D/section-164.402
- "HIPAA Security Rule: Still on Track for Finalization." Alston & Bird, November 2025. https://www.alston.com/en/insights/publications/2025/11/hipaa-security-rule-overhaul
This article is for informational purposes only and does not constitute legal or compliance advice. Covered entities and business associates should consult qualified legal counsel or compliance professionals before making decisions pertaining to HIPAA or IT infrastructure.