TechBeacon’s Essential Guide to AI and the SOC lays the groundwork for choosing an artificial intelligence system for your organization. It explains why these tools are essential for securing today’s distributed systems, shows how AI is being applied to security, and provides criteria for evaluating AI-powered tools for your security operations center.
But choosing an AI tool is only the beginning of your journey. Deploying an AI system and ensuring its effectiveness can be challenging. A poor implementation will not only fail to achieve the desired results, but will also burden already overtaxed security teams with an even bigger workload.
Here are four best practices for selecting, deploying, and monitoring an AI system for the SOC to ensure its effectiveness—and even improve it over time.
1. The math matters, but the use case matters more
A large US company my organization worked with was having problems with an insider threat. Unfortunately, the company had spent a year and $1 million on an analytics deployment that failed to detect the attacks. When we implemented our system with a proper set of best practices, we identified not only the original pair of engineers the company had suspected, but also 11 additional people it didn't know about, including eight nation-state attackers based in China.
The first system failed largely because the math wasn’t right for the job. If there is a misalignment between what the math tries to do and the use case you're focused on, then it doesn't matter how powerful the algorithm is. For the best outcome, define a use case first, then use it to test the AI capability. I recommend using an internal red team or pen-testing group to evaluate the system, simulate your high-priority use cases, and ensure that the AI system (and therefore the underlying math) provides the coverage you need.
2. AI should produce fewer alerts, not more
The job of AI and automation is to consume vast amounts of data and divide your environment into areas where you need to look for security risks and areas where you don’t, thereby reducing the amount of work your SOC must do manually. If that's not an up-front goal of your implementation, the AI system is not going to be helpful; it will be like adding yet another tool to a team that probably already has too many.
When you’re evaluating your system, it’s important to measure the amount of work that it's saving your SOC team. A simple way to do this is to count and compare the number of tickets generated, their accuracy, and the resolution time, both before and after the introduction of AI. Ultimately, the AI system should increase your SOC team's efficiency.
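One way to make this comparison concrete is to compute the same summary statistics over ticket samples from before and after the AI deployment. The sketch below is illustrative: the ticket data, field names, and `soc_metrics` helper are all hypothetical, not part of any particular product.

```python
from statistics import mean

# Hypothetical ticket samples; each tuple is (was_true_positive, resolution_hours).
# All figures are made up for illustration.
before_ai = [(True, 6.0), (False, 2.5), (False, 1.0), (True, 8.0), (False, 3.0)]
after_ai = [(True, 4.0), (True, 5.5), (False, 1.5)]

def soc_metrics(tickets):
    """Summarize ticket volume, alert accuracy, and mean resolution time."""
    count = len(tickets)
    accuracy = sum(1 for is_real, _ in tickets if is_real) / count
    avg_resolution = mean(hours for _, hours in tickets)
    return {"tickets": count, "accuracy": accuracy, "mean_resolution_hours": avg_resolution}

print(soc_metrics(before_ai))  # more tickets, lower accuracy
print(soc_metrics(after_ai))   # fewer tickets, higher accuracy
```

In this toy sample, ticket volume drops, accuracy rises, and mean resolution time falls after deployment, which is the pattern you should expect if the AI system is genuinely reducing your team's workload.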
3. Tie automated responses to measured risk values
A vice president at a defense contractor went into the office one Saturday afternoon, plugged in a hard drive, and started copying a bunch of files, including some highly classified documents. Fortunately, the firm's AI system recognized a high-risk data theft in progress and alerted the security team—even though it was during a weekend. The company intercepted the vice president before he left the building and recovered the hard drive.
As this incident demonstrates, risk is not binary, and your AI output shouldn’t be either. The vice president had never come in on a Saturday before, but that alone wasn’t a red flag, because he could have been putting in extra hours to catch up on work. But the fact that he accessed file shares he didn’t normally use and then viewed and copied highly sensitive files indicated a pattern of increasingly risky behavior.
An effective AI deployment uses security analytics to measure risk in a probabilistic manner and then generates a risk assessment score, letting you distinguish between the important stuff that demands immediate action and things that can wait until Monday morning. By integrating these measured risk values to generate an appropriate automated response, you can maximize the impact of your AI-based tools.
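In practice, tying response to measured risk often means mapping a continuous score to a small set of response tiers. The following is a minimal sketch under assumed thresholds; the tier names, cutoffs, and `automated_response` function are illustrative and would need tuning to your environment.

```python
def automated_response(risk_score: float) -> str:
    """Map a probabilistic risk score in [0.0, 1.0] to a tiered response.

    Thresholds here are hypothetical examples, not recommendations.
    """
    if risk_score >= 0.9:
        return "block-and-page"    # data theft in progress: suspend access, page on-call
    if risk_score >= 0.7:
        return "alert-analyst"     # high risk: open a ticket for immediate triage
    if risk_score >= 0.4:
        return "enrich-and-queue"  # gather context; review during business hours
    return "log-only"              # record for baselining; no human action needed

print(automated_response(0.95))  # block-and-page
print(automated_response(0.45))  # enrich-and-queue
```

The key design point is the graded middle: most anomalies (the lone Saturday login, say) land in the lower tiers and wait until Monday, while only the compound, high-score patterns trigger an immediate automated response.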
4. Use meaningful metrics
As the management adage goes, what gets measured gets managed: whatever you measure, you can optimize. When you deploy an AI system, it’s critical to define meaningful operational metrics. Ideally, these are things you’re already reporting, using language the board or CEO understands. It’s tempting to speak in the language of AI and statistics, using terms such as “statistical false positives,” “precision,” and “recall.” But these terms, while valid in the world of AI, do nothing to help company leaders understand the ROI they’re getting from the AI system in the context of the business.
Instead, use language that highlights desired outcomes. For example, a healthcare company we worked with reported on “incident alert accuracy,” which increased from 20% to 90% after deploying AI. A statistician would call that “statistical precision,” but that wasn’t the right language for this company, nor did it illustrate how this result helped the team use its time more effectively.
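For reference, the metric behind that number is straightforward: of all alerts raised, what fraction turned out to be real incidents? A short sketch, with illustrative figures matching the 20% and 90% in the example above:

```python
def incident_alert_accuracy(true_positives: int, false_positives: int) -> float:
    """'Incident alert accuracy' as reported to leadership.

    A statistician would call this precision: TP / (TP + FP).
    The figures below are illustrative, not real customer data.
    """
    return true_positives / (true_positives + false_positives)

# Before AI: 20 of 100 alerts were real incidents -> 20% accuracy
print(incident_alert_accuracy(20, 80))  # 0.2
# After AI: 90 of 100 alerts were real incidents -> 90% accuracy
print(incident_alert_accuracy(90, 10))  # 0.9
```

The computation is the same either way; what changes is the framing. "9 out of 10 alerts your analysts investigate are real" tells a business audience far more than "precision rose to 0.9."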
Metrics are ideally used to validate the effectiveness of the security team, not the AI system, so make meaningful connections between what you’re measuring and how it improves the performance of the SOC.
Be smart about AI in your SOC
AI by itself does not magically make your SOC more efficient. You need to deploy it effectively. These best practices for choosing, testing, automating, and measuring will help ensure that you are getting the most out of your analytics system.