Measure the effectiveness of your Microsoft security with AttackIQ

This blog post is part of the Microsoft Intelligent Security Association guest blog series. Learn more about MISA.

To improve an organization's cybersecurity readiness, you need to test that your detection and prevention technologies work as intended and that your security program is performing as well as it can. A Ponemon Institute survey of more than 500 information technology and security leaders across sectors found that 53 percent were uncertain about the effectiveness and performance of their cybersecurity capabilities.1 The reason? Even the most advanced security controls fail due to human error and configuration drift, and when they do, they fail silently. They need to be tested continuously to ensure performance. By analogy, even the best sports teams in the world need to exercise and prepare their defenses for attacks. If they don't train, they atrophy. To ensure readiness, everyone needs to prepare for known threats.

Measuring security effectiveness using MITRE ATT&CK®

The good news is that the MITRE ATT&CK framework provides cyber defenders with the known tactics, techniques, and behaviors that adversaries use to conduct an attack. Today, Microsoft and AttackIQ are working together, including through the Microsoft Evaluation Lab, to automate testing using MITRE ATT&CK and a threat-informed defense. AttackIQ is a part of the Microsoft Intelligent Security Association (MISA), an ecosystem of independent software vendors and managed security service providers that have integrated their solutions to better defend against a world of increasing threats. MISA helps break down silos between security organizations to build better combined solutions and improve the world's cybersecurity posture.

AttackIQ enables Microsoft customers to test their use of Microsoft Defender for Endpoint, Azure native cloud security controls, and Microsoft Sentinel, running adversary emulations against the security program to generate detailed data that the team can use. With granular performance data, the customer can make informed decisions about people, processes, and technology, and elevate the security program's overall performance.

Let's look at some of the ways the two companies work together.

Emulating the adversary to test Microsoft Defender for Endpoint

To validate cybersecurity readiness, AttackIQ integrates with Microsoft Defender for Endpoint to emulate cyberattacks with realism and specificity. It does so at scale and continuously, testing Microsoft Defender for Endpoint's AI-enabled technologies to generate granular data about security program performance.

Testing Microsoft Azure and Microsoft Sentinel

In addition to testing Microsoft Defender for Endpoint, the AttackIQ Security Optimization Platform runs assessments and scenarios against the native cloud controls in Microsoft Azure, leveraging research from MITRE Engenuity's Center for Threat-Informed Defense that maps the native security controls in Azure to MITRE ATT&CK. AttackIQ has built assessments to measure the effectiveness of native cloud controls. In addition to Azure's native controls, AttackIQ is integrated with Microsoft Sentinel, enabling Microsoft Sentinel users to test their detection pipeline and fine-tune security processes across their organization.
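To illustrate the idea behind such control-to-ATT&CK mappings, the sketch below shows how native cloud controls can be related to technique IDs so that test coverage can be scored. The control names and technique assignments here are hypothetical examples for illustration, not the Center for Threat-Informed Defense's actual mapping data.

```python
# Illustrative sketch: relating native cloud controls to MITRE ATT&CK
# technique IDs so coverage of a test run can be reasoned about.
# Control names and mappings below are invented examples only.

CONTROL_TO_TECHNIQUES = {
    "Example Network Control": {"T1046", "T1595"},   # e.g. scanning behaviors
    "Example Identity Control": {"T1110", "T1078"},  # e.g. brute force, valid accounts
}

def coverage(techniques_tested: set[str]) -> dict[str, float]:
    """Fraction of each control's mapped techniques exercised by a test run."""
    return {
        control: len(mapped & techniques_tested) / len(mapped)
        for control, mapped in CONTROL_TO_TECHNIQUES.items()
    }

# One technique per control was exercised, so each scores 0.5.
print(coverage({"T1110", "T1046"}))
```

A real assessment library would carry many more techniques per control; the point is that mapping controls to ATT&CK turns "did the test pass" into a measurable coverage question.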

Generating actionable performance data

Security teams can schedule assessments to run against Microsoft Defender for Endpoint and Microsoft Azure as frequently as needed. Based on continuous testing, the AttackIQ Security Optimization Platform generates point-in-time and longitudinal data about security control performance, giving teams a sense of the program's overall readiness.

Aligning MITRE ATT&CK with Microsoft

AttackIQ brings a deep alignment with MITRE ATT&CK to its automated security control validation for Microsoft's security capabilities, leveraging an extensive scenario library of tactics, techniques, and sub-techniques to validate security program performance.

AttackIQ scenarios

Below is an image of an AttackIQ interface scenario that provides a basic function check of Microsoft Defender for Endpoint. Within the AttackIQ Security Optimization Platform, users can select this scenario from a range of scenarios to validate the effectiveness of Microsoft Defender for Endpoint. From there, the user can assign the scenario to run against Microsoft Defender for Endpoint to validate its effectiveness across their infrastructure.

AttackIQ interface scenario that provides a basic function check of Microsoft Defender for Endpoint.

After running the scenario, the AttackIQ Security Optimization Platform shows results of how well Microsoft Defender for Endpoint performed in its prevention and detection functions, alerting the customer's security team to any configuration challenges or other issues that may need attention.

The AttackIQ Security Optimization Platform also includes scenarios for testing Azure Blob storage accounts, as the below image shows.

The scenarios for testing Azure Blob storage accounts.
AttackIQ platform showcasing Data Harvesting from Blob Storage accounts.

Beyond atomic tests of how well Microsoft Defender for Endpoint works in detecting and preventing an attacker's tactics, techniques, and procedures (TTPs), AttackIQ's Anatomic Engine chains together TTPs, aligned to the MITRE ATT&CK framework, in a realistic and comprehensive adversary attack flow to run a range of adversary TTPs against an organization. AttackIQ's Anatomic Engine is designed to test advanced AI-enabled defense capabilities like those within Microsoft Defender for Endpoint, Microsoft Azure, and Microsoft Sentinel, emulating the adversary with specificity and realism every step of the way.
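As a rough sketch of the chaining concept, an attack flow can be modeled as an ordered list of ATT&CK-mapped steps, where an effectively blocked step stops the chain, just as a real attacker's progress stops at an effective control. The step names, technique IDs, and outcomes below are invented for illustration; this is not AttackIQ's actual engine or API.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical sketch of chaining ATT&CK-mapped steps into one attack flow.
# Step names, technique IDs, and outcomes are illustrative only.

@dataclass
class Step:
    technique_id: str            # MITRE ATT&CK technique ID, e.g. "T1059"
    name: str
    execute: Callable[[], bool]  # True if the emulated behavior succeeded

@dataclass
class FlowResult:
    executed: list = field(default_factory=list)  # techniques that ran unblocked
    blocked_at: Optional[str] = None              # first technique a control stopped

def run_flow(steps):
    """Run steps in order; a blocked (prevented) step ends the chain."""
    result = FlowResult()
    for step in steps:
        if step.execute():
            result.executed.append(step.technique_id)
        else:
            result.blocked_at = step.technique_id
            break
    return result

flow = [
    Step("T1566", "Initial access (emulated phish)", lambda: True),
    Step("T1059", "Command execution", lambda: True),
    Step("T1041", "Exfiltration attempt", lambda: False),  # control blocks here
]
print(run_flow(flow))
```

Recording where each emulated flow was stopped (or not) is what turns a chained test into actionable data: it shows the defender exactly how deep an adversary could get before a control fired.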

How a security control or set of controls have performed against MITRE ATT&CK-aligned scenarios and attack flows.

Once tests have been conducted, AttackIQ generates reports from a single point in time, or longitudinally over time, to show how a security control or set of security controls has performed against the MITRE ATT&CK-aligned scenarios and attack flows that AttackIQ has built and run. The below illustrative diagrams show how AttackIQ generates performance data for detection and prevention failures and successes for a security control.

How AttackIQ generates performance data for detection.
Bar graph of historical run rate for automated testing.

The benefits of automated testing extend beyond single point-in-time analysis. The detection and prevention results can be aggregated longitudinally to show program performance over time. With real performance data, teams can identify control failures and gaps in the organization's defensive posture, make adjustments or investments to improve performance, and investigate unseen, underlying issues that may be impacting operations.
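The aggregation itself is straightforward. A minimal sketch, using invented result records rather than any real AttackIQ output format, might roll per-scenario detection outcomes into a per-control trend across test dates:

```python
from collections import defaultdict

# Illustrative sketch: aggregating point-in-time test results into a
# longitudinal view of detection performance. Records are invented.
results = [
    {"date": "2022-01-01", "control": "EDR", "detected": True},
    {"date": "2022-01-01", "control": "EDR", "detected": False},
    {"date": "2022-01-08", "control": "EDR", "detected": True},
    {"date": "2022-01-08", "control": "EDR", "detected": True},
]

def detection_rate_by_date(records):
    """Percent of scenarios detected, per control, per test date."""
    buckets = defaultdict(list)
    for r in records:
        buckets[(r["control"], r["date"])].append(r["detected"])
    return {k: 100.0 * sum(v) / len(v) for k, v in buckets.items()}

print(detection_rate_by_date(results))
```

A rising rate across dates suggests tuning is working; a sudden drop flags a configuration change or a silently failing control that warrants investigation.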

Human performance evaluation

Why is this important? It is not just about testing technology. All our technologies are run by human teams. Human factors, therefore, play a key role in security program performance, and discovering the issues that are impacting a security team requires deeper investigation than a simple control test alone can provide. But if you don't test your controls, you will never know you have a problem.

Consider the example of a large AttackIQ healthcare customer. Automated testing revealed a security control failure in the customer's defense capabilities, and on further investigation, they learned that it was due to a lapse in a managed security service provider (MSSP) contract. The security leader investigated the issue and discovered that his large security team faced a problem with attrition due to discrepancies in pay scales. His next call was to the head of human resources to talk about raising salaries. The technology, in this case, was not the problem: the issue was one of pay, not technology management. The process of continuous security validation revealed underlying issues in human resources that had a negative impact on the team's ability to use an advanced technology effectively.

A comprehensive partnership

Security controls falter for a range of reasons, and continuous testing helps reveal areas of weakness and strength in a customer's security program. Microsoft and AttackIQ are helping make cyberspace safe and secure by validating Microsoft's security technologies through automated testing, underpinned by the MITRE ATT&CK framework. By emulating the adversary with realism and specificity every step of the way, AttackIQ helps Microsoft customers achieve their highest return on investment from the company's security products. 

About AttackIQ

AttackIQ, a leading independent vendor of breach and attack simulation solutions, built the industry's first Security Optimization Platform for continuous security control validation and improving security program effectiveness and efficiency. AttackIQ is trusted by leading organizations worldwide to identify security improvements and verify that cyber defenses work as expected, aligned with the MITRE ATT&CK framework. The company is committed to giving back to the cybersecurity community through its free AttackIQ Academy, open Preactive Security Exchange, and partnership with MITRE Engenuity's Center for Threat-Informed Defense. For more information, visit their website. You can also follow AttackIQ on Twitter, LinkedIn, and YouTube.

Learn more

To learn more about the Microsoft Intelligent Security Association (MISA), visit our website, where you can learn about the MISA program, product integrations, and find MISA members. Visit the video playlist to learn about the strength of member integrations with Microsoft products.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

1Security Investments Increasing, But 53% Leaders Unsure of Effectiveness, Jessica Davis, Health IT Security. July 30, 2019.

The post Measure the effectiveness of your Microsoft security with AttackIQ appeared first on Microsoft Security Blog.
