

Microsoft Corp. today unveiled a major expansion of its artificial intelligence security and governance offerings with the introduction of new capabilities designed to secure the emerging “agentic workforce,” a world where AI agents and humans collaborate and work together.
Announced at the company’s annual Build developer conference, Microsoft is expanding Entra, Defender and Purview, embedding them directly into Azure AI Foundry and Copilot Studio to help organizations secure AI apps and agents across the entire development lifecycle.
The expanded capabilities collectively seek to address a growing issue in AI development — securing systems from prompt injection, data leakage and identity sprawl, all while also ensuring compliance with various regulations.
Leading the list of announcements is the launch of Entra Agent ID, a new centralized solution designed to manage the identities of AI agents built in Copilot Studio and Azure AI Foundry. Each agent is automatically assigned a secure, trackable identity in Microsoft Entra, giving security teams visibility and governance over nonhuman actors in the enterprise.
The integration includes support for third-party platforms, with Microsoft announcing new partnerships with ServiceNow Inc. and Workday Inc. to support identity provisioning across human resource and workforce systems. With Entra Agent ID, security teams can now unify oversight of AI agents and human users within a single administrative interface, laying the groundwork for broader nonhuman identity governance across the enterprise.
Also announced today is a feature that sees security insights from Microsoft Defender for Cloud are now integrated directly into Azure AI Foundry to give developers AI-specific threat alerts and posture recommendations without leaving their environment. The alerts cover more than 15 detection types, such as jailbreaks, misconfigurations and sensitive data leakage. The idea here is that by removing friction between development and security teams, the integration enables faster response to evolving threats without slowing down deployment.
Purview, Microsoft’s integrated data security, compliance and governance platform, is getting a new software development kit that allows developers to embed policy enforcement, auditing and data loss prevention into AI systems. The SDK allows organizations to identify sensitive data risks in real time, apply auto-labeling to Dataverse tables and inherit sensitivity classifications across AI agent outputs to ensure consistent protection from development through production.
Given that the security announcements today focus on AI security, not surprisingly, Microsoft’s Azure AI Foundry is gaining updates, including one called “Spotlighting” that can detect prompt injection attacks embedded in external content, real-time task adherence evaluation and continuous monitoring dashboards. The feature allows developers to confirm agent behavior remains within scope and aligned with enterprise policy.
Azure AI Foundry now also supports compliance workflows through integration with Microsoft Purview Compliance Manager and third-party governance solutions Credo.AI Inc. and Saidot Ltd. With the update, developers can now run algorithmic impact assessments, generate reports and surface risk evidence for security and compliance teams, all from within the Azure AI environment.
Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.
Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.