Secure your AI applications from code to runtime with Microsoft Defender for Cloud

Microsoft Defender for Cloud becomes the first CNAPP to protect enterprise-built AI applications across the application lifecycle

The pace of digital transformation has accelerated with the introduction of generative AI (GenAI), unlocking a wide range of innovations with intelligent applications. Organizations are choosing to develop new GenAI applications and embed AI into existing applications to increase business efficiency and productivity.

Attackers are increasingly looking to exploit AI applications to alter the designed purpose of the underlying model with new attacks like prompt injections, wallet attacks, model theft, and data poisoning, while increasing susceptibility to known risks such as data breaches and denial of service. Security teams need to be prepared and ensure they have the proper security controls for their AI applications, along with detections that address the new threat landscape.

As a market-leading cloud-native application protection platform (CNAPP), Microsoft Defender for Cloud helps organizations secure their hybrid and multicloud environments from code to cloud. We are excited to announce the preview of new security posture and threat protection capabilities to enable organizations to protect their enterprise-built GenAI applications throughout the entire application lifecycle.

With the new security capabilities to protect AI applications, security teams can now:

  • Continuously discover GenAI application components and AI artifacts from code to cloud.
  • Explore and remediate risks to GenAI applications with built-in recommendations to strengthen security posture.
  • Identify and remediate toxic combinations in GenAI applications using attack path analysis.
  • Detect threats to GenAI applications powered by Azure AI Content Safety prompt shields, Microsoft threat intelligence signals, and contextual activity monitoring.
  • Hunt and investigate attacks on GenAI apps with a built-in integration with Microsoft Defender XDR.

Start secure with AI security posture management

With 98% of organizations using public cloud embracing a multicloud strategy[1], many of our customers use Microsoft Defender Cloud Security Posture Management (CSPM) in Defender for Cloud to get visibility across their multicloud environments and address cloud sprawl. With the complexities of AI workloads and their configurations across models, SDKs, and connected data stores, visibility into their inventory and the risks associated with them is more important than ever.

To enable customers to gain a better understanding of their deployed AI applications and get ahead of potential threats – we're announcing the public preview of AI security posture management (AI-SPM) as part of Defender CSPM.

Defender CSPM can automatically and continuously discover deployed AI workloads with agentless and granular visibility into the presence and configurations of AI models, SDKs, and technologies used across AI services such as Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock.

The new AI posture capabilities in Defender CSPM discover GenAI artifacts by scanning code repositories for Infrastructure-as-Code (IaC) misconfigurations and scanning container images for vulnerabilities. With this, security teams have full visibility of their AI stack from code to cloud and can detect and fix vulnerabilities and misconfigurations before deployment. In the example below, the cloud security explorer can be used to discover several running containers across clouds using LangChain libraries with known vulnerabilities.

Using the cloud security explorer in Defender for Cloud to discover container images with CVEs on their AI libraries that are already deployed in containers in Azure, AWS, and GCP.
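To illustrate the idea behind this kind of scan, the sketch below flags container images whose pinned Python dependencies match a known-vulnerable advisory entry, the way an image scan surfaces CVEs on LangChain libraries. This is not Defender for Cloud's implementation; the library names and vulnerable versions are placeholders.

```python
# Illustrative sketch only: detect AI libraries with known-vulnerable
# versions in a requirements file, similar in spirit to how container
# image scanning surfaces CVEs on AI libraries.

# Hypothetical advisory data: library -> versions known to be vulnerable.
KNOWN_VULNERABLE = {
    "langchain": {"0.0.171", "0.0.225"},   # placeholder versions
    "transformers": {"4.30.0"},            # placeholder version
}

def find_vulnerable_ai_libs(requirements_text: str) -> list[tuple[str, str]]:
    """Return (library, version) pairs that match a known-vulnerable entry."""
    findings = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        name, version = name.strip().lower(), version.strip()
        if version in KNOWN_VULNERABLE.get(name, set()):
            findings.append((name, version))
    return findings

reqs = """\
fastapi==0.110.0
langchain==0.0.171
transformers==4.30.0
"""
print(find_vulnerable_ai_libs(reqs))  # → [('langchain', '0.0.171'), ('transformers', '4.30.0')]
```

In practice this matching is done against real vulnerability advisories across every layer of the container image, not a hard-coded list.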

By mapping out AI workloads and synthesizing security insights such as identity, data security, and internet exposure, Defender CSPM continuously surfaces contextualized security issues and suggests risk-based security recommendations tailored to prioritize critical gaps across your AI workloads. Relevant security recommendations also appear within the Azure OpenAI resource itself in Azure portal, providing developers or workload owners direct access to recommendations and helping remediate faster.

Recommendations and alerts surfaced directly in the resource page of Azure OpenAI in the Azure portal, aiming to meet business users and resource owners directly.

Grounding and fine-tuning are top of mind for organizations looking to infuse their GenAI with the relevant business context. Our attack path analysis capability can identify sophisticated risks to AI workloads, including data security scenarios where grounding or fine-tuning data is exposed to the internet through lateral movement and is susceptible to data poisoning.

This attack path has identified that a VM with vulnerabilities has access to a data store that was tagged as a grounding resource for GenAI applications. This opens the data store to risks such as data poisoning.

A common oversight occurs when a GenAI model is grounded with sensitive data, opening the door to sensitive data leaks. It is important to follow architecture and configuration best practices to avoid unnecessary risks such as unauthorized or excessive data access. Our attack paths will find sensitive data stores that are linked to AI resources and extend wide privileges, allowing security teams to focus their attention on the top recommendations and remediations that mitigate these risks.

This attack path has captured that the GenAI application is grounded with sensitive data and is internet exposed, making the data susceptible to leakage if proper guardrails are not in place.
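Conceptually, attack path analysis like the scenarios above can be thought of as a reachability search over a graph of cloud resources. The minimal sketch below finds paths from the internet, through a vulnerable node, to a data store tagged as GenAI grounding data. The resource names and graph model are hypothetical and only illustrate the idea, not the product's engine.

```python
# Minimal sketch of attack path analysis over a cloud resource graph:
# find paths from an internet-exposed entry point, through a vulnerable
# node, to a data store tagged as sensitive GenAI grounding data.
# All resource names are hypothetical.
from collections import deque

# Directed edges model access: e.g. "vm-web-1 can read storage-grounding".
GRAPH = {
    "internet": ["vm-web-1"],
    "vm-web-1": ["storage-grounding"],   # VM has a known CVE
    "storage-grounding": [],             # tagged: grounding data, sensitive
}
VULNERABLE = {"vm-web-1"}
SENSITIVE_GROUNDING = {"storage-grounding"}

def attack_paths(source="internet"):
    """BFS for paths that traverse a vulnerable node and end at grounding data."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in SENSITIVE_GROUNDING and VULNERABLE & set(path):
            paths.append(path)
            continue
        for nxt in GRAPH.get(node, []):
            if nxt not in path:          # avoid cycles
                queue.append(path + [nxt])
    return paths

print(attack_paths())  # → [['internet', 'vm-web-1', 'storage-grounding']]
```

The real capability enriches each node with identity, data sensitivity, and exposure insights, which is what turns a raw path into a prioritized, contextualized finding.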

Furthermore, attack path analysis in Defender CSPM can discover risks in multicloud scenarios, such as an AWS workload using an Amazon Bedrock model, and in cross-cloud mixed stacks, a typical architecture where data and compute resources reside in GCP or AWS while leveraging Azure OpenAI model deployments.

An attack path surfacing vulnerabilities in an Azure VM that has access to an Amazon account with an active Bedrock service. These kinds of attack paths are easy to miss given their hybrid cloud nature.

Stay secure in runtime with threat protection for AI workloads

With organizations racing to embed AI in their enterprise-built applications, security teams need to be prepared with threat protection tailored to emerging threats to AI workloads. The potential attack techniques targeting AI applications do not revolve around the AI model alone, but around the entire application as well as the training and grounding data it can leverage.

To complement our posture capabilities, today we are thrilled to announce the limited public preview of threat protection for AI workloads in Microsoft Defender for Cloud. The new threat protection offering leverages a native integration with Azure OpenAI Service, Azure AI Content Safety prompt shields, and Microsoft threat intelligence to deliver contextual and actionable security alerts. Threat protection for AI workloads allows security teams to monitor their Azure OpenAI-powered applications in runtime for malicious activity associated with direct and indirect prompt injection attacks, sensitive data leaks and data poisoning, as well as wallet abuse or denial-of-service attacks.
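As a sketch of how an application might screen input with Azure AI Content Safety Prompt Shields before it reaches the model, the code below builds a shield request and parses the response. The field names (`userPrompt`, `documents`, `userPromptAnalysis`, `documentsAnalysis`, `attackDetected`) are assumptions based on the public REST surface; the HTTP call itself is omitted, so verify the endpoint and schema against the current Content Safety documentation.

```python
# Sketch of screening input with Azure AI Content Safety Prompt Shields
# before forwarding it to a model. Field names are assumptions based on
# the public REST API; the actual HTTP call is omitted here.
import json

def build_shield_request(user_prompt: str, documents: list[str]) -> dict:
    """Request body: the user's prompt plus any grounding documents to
    screen for direct and indirect prompt injection."""
    return {"userPrompt": user_prompt, "documents": documents}

def attack_detected(response_body: str) -> bool:
    """Parse a Prompt Shields response and report whether an injection
    attempt was flagged in the prompt or in any grounding document."""
    body = json.loads(response_body)
    if body.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(d.get("attackDetected") for d in body.get("documentsAnalysis", []))

# Example response shape (assumed) for a detected direct prompt injection:
sample = json.dumps({
    "userPromptAnalysis": {"attackDetected": True},
    "documentsAnalysis": [{"attackDetected": False}],
})
print(attack_detected(sample))  # → True
```

In the Defender for Cloud integration this screening happens natively on the service side, so application code does not need to orchestrate these calls itself.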

GenAI applications are commonly grounded with organizational data. If sensitive data is held in the same data store, it can accidentally be shared or solicited via the application. In the alert below, we can see an attempt to exfiltrate sensitive data using direct prompt injection on an Azure OpenAI model deployment. By leveraging the evidence provided, SOC teams can investigate the alert, assess the impact, and take precautionary steps such as limiting users' access to the application or removing the sensitive data from the grounding data source.

The sensitive data that was passed in the response was detected and surfaced as an alert in Defender for Cloud.

Defender for Cloud has a built-in integration with Microsoft Defender XDR, so security teams can view the new security alerts related to AI workloads in the Defender XDR portal. This gives more context to those alerts and allows correlation with alerts across cloud resources, devices, and identities. Security teams can also use Defender XDR to understand the attack story and the related malicious activities associated with their AI applications by exploring correlations of alerts and incidents.

An incident in Microsoft Defender XDR detailing 3 separate Defender for Cloud alerts originating from the same IP targeting the Azure OpenAI resource – sensitive data leak, credential theft, and jailbreak detections.
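The incident above groups alerts that share an attacker IP and target resource. A toy version of that correlation step might look like the following; the alert names and fields are hypothetical and only illustrate the grouping logic, not Defender XDR's actual correlation engine.

```python
# Illustrative sketch: group alerts sharing a source IP and target
# resource into one incident, similar in spirit to how Defender XDR
# stitches related alerts into an attack story. Fields are hypothetical.
from collections import defaultdict

alerts = [
    {"name": "Sensitive data leak", "ip": "203.0.113.7", "resource": "openai-prod"},
    {"name": "Credential theft attempt", "ip": "203.0.113.7", "resource": "openai-prod"},
    {"name": "Jailbreak attempt", "ip": "203.0.113.7", "resource": "openai-prod"},
    {"name": "Unrelated alert", "ip": "198.51.100.2", "resource": "vm-web-1"},
]

def correlate(alerts):
    """Return incidents keyed by (source IP, target resource)."""
    incidents = defaultdict(list)
    for a in alerts:
        incidents[(a["ip"], a["resource"])].append(a["name"])
    return dict(incidents)

incidents = correlate(alerts)
print(len(incidents[("203.0.113.7", "openai-prod")]))  # → 3
```

Real correlation also weighs time windows, identities, and kill-chain stage, which is what lets an analyst read three separate alerts as one attack story.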

Learn more about securing AI applications with Defender for Cloud

  • Get started with AI security posture management in Defender CSPM
  • Get started with threat protection for AI workloads in Defender for Cloud
  • Get access to threat protection for AI workloads in Defender for Cloud in preview
  • Read more about securing your AI transformation with Microsoft Security
  • Learn about Defender for Cloud pricing

Additional resources

Ron Matchoro, Principal Group Product Manager, Microsoft Defender for Cloud

Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud

[1] 451 Research, Multicloud in the Mainstream, 2023

 

This article was originally published by Microsoft's Defender for Cloud Blog. You can find the original article here.