Artificial intelligence (AI) and machine learning (ML) give security teams the ability to catch bad guys with the power of math. Through the use of effective analytical methods, organizations can become more cyber resilient.
With statistical learning; supervised, semi-supervised, and unsupervised ML; advanced visualizations; and other principled approaches tailored for cybersecurity, you will be one step ahead of the game.
Here are six ways AI and ML, along with analytics, can boost your company's cyber resilience.
1. Remove friction
AI and ML can remove friction in managing identities through adaptive authentication, which dynamically escalates the factors needed to verify an identity based on risk.
For example, users logging into a system or accessing a critical resource might normally be prompted for a username and password plus an MFA token or a CAPTCHA solution. With adaptive authentication, the CAPTCHA, the second factor, or even the password can be safely skipped when the system determines there is little risk that the person signing in is not the legitimate owner of those credentials, reducing friction for end users without compromising security.
The system does this by performing statistical analysis behind the scenes to determine whether the observed behavior is unusual enough to warrant additional authentication steps.
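To make the idea concrete, here is a minimal sketch of risk-based step-up authentication; the signals, weights, and thresholds are hypothetical illustrations, not any vendor's actual model.

```python
# Minimal sketch of risk-based step-up (adaptive) authentication.
# All feature names, weights, and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool        # device fingerprint seen before for this user
    geo_velocity_kmh: float   # implied travel speed since the last login
    failed_attempts_24h: int  # recent failed logins for this account
    hour_of_day: int          # local hour of the login attempt

def risk_score(ctx: LoginContext) -> float:
    """Combine simple behavioral signals into a 0..1 risk score."""
    score = 0.0
    if not ctx.known_device:
        score += 0.4
    if ctx.geo_velocity_kmh > 900:          # faster than a commercial flight
        score += 0.3
    score += min(ctx.failed_attempts_24h, 5) * 0.05
    if ctx.hour_of_day < 5:                 # unusual off-hours access
        score += 0.1
    return min(score, 1.0)

def required_factors(ctx: LoginContext) -> list[str]:
    """Escalate authentication factors only when the risk warrants it."""
    r = risk_score(ctx)
    if r < 0.2:
        return ["password"]                      # low risk: minimal friction
    if r < 0.6:
        return ["password", "mfa_token"]         # moderate risk: step up
    return ["password", "mfa_token", "captcha"]  # high risk: full challenge
```

A production system would learn these weights statistically from login history rather than hard-coding them, but the escalation logic follows the same shape.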
2. Speed up threat reaction times
ML can now be used in well-understood and effective ways to identify potential threats in an environment, but the most powerful and sophisticated approaches do this using a "batch" learning process. Batching is necessary because of the sheer volume of data these methods must analyze to detect the most advanced attacks. Today, those batches are typically processed overnight, so it can be 24 hours before the results reach human eyes.
But soon that lag time will disappear, and ML will run in real time. Thanks to mathematical and technical innovations, security analysts will be able to watch a dashboard and see risk scores change based on events happening at that moment, or rely on an automated process that responds to fast-moving attacks as they unfold.
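As a rough illustration of what real-time scoring looks like, here is a minimal sketch of streaming anomaly detection that updates its model on every event; the metric being scored and the sample values are hypothetical.

```python
# Minimal sketch of streaming (near-real-time) anomaly scoring using a
# running mean and variance (Welford's algorithm). Event values are hypothetical.
import math

class StreamingAnomalyScorer:
    """Scores each event against statistics built from all prior events."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations

    def score(self, value: float) -> float:
        # Score against the current model *before* updating it.
        if self.n > 1:
            std = math.sqrt(self.m2 / (self.n - 1))
            z = abs(value - self.mean) / std if std > 0 else 0.0
        else:
            z = 0.0
        # Online update (Welford's algorithm) -- no overnight batch needed.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return z

# Example: score bytes transferred per event as events stream in;
# the sudden 250 MB transfer stands out immediately.
scorer = StreamingAnomalyScorer()
for event_bytes in [1_200, 900, 1_100, 1_000, 250_000_000]:
    print(scorer.score(event_bytes))
```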
3. Enhance your app sec team's performance
Application security suites can scan your source code to look for security vulnerabilities. Supporting these scans are teams of human auditors who are experts in both security and programming languages, a rare combination of skills. These human experts comb through the results to determine which flaws should be investigated and which are irrelevant to the security of the application.
ML is already being used to learn what those auditors do and turn those hard-to-find skills into an automated process, allowing the work to scale far more efficiently. Advances in both ML techniques and available data will improve accuracy even further, take advantage of newer and richer data and context, and handle model drift.
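One way to frame this is as a supervised triage classifier trained on the auditors' past decisions. The sketch below uses scikit-learn; the findings, features, and labels are hypothetical examples, not real scanner output.

```python
# Minimal sketch of learning auditor triage decisions with a supervised classifier.
# Findings, features, and labels are hypothetical, not real scanner output.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each finding is described by simple features; label 1 means the auditor
# confirmed a real flaw, 0 means it was marked a false positive.
findings = [
    {"rule": "sql_injection", "sink_reachable": 1, "sanitizer_present": 0},
    {"rule": "sql_injection", "sink_reachable": 0, "sanitizer_present": 1},
    {"rule": "xss",           "sink_reachable": 1, "sanitizer_present": 0},
    {"rule": "xss",           "sink_reachable": 1, "sanitizer_present": 1},
]
labels = [1, 0, 1, 0]

model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
model.fit(findings, labels)

# Rank a new scan's findings so auditors review likely true positives first.
new_finding = {"rule": "sql_injection", "sink_reachable": 1, "sanitizer_present": 0}
print(model.predict_proba([new_finding])[0][1])  # estimated probability the flaw is real
```

In practice the features would come from the scanner's own output (data flow, code context, historical dispositions), and the model would be retrained as auditors keep labeling results, which is also how drift gets handled.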
In a similar way, ML can be applied to the work of threat hunters. Like source code auditing, threat hunting is a labor-intensive activity that requires specialized skills. With these AI enhancements, a threat hunter only needs to run down a specific attack pattern once; the machine learns from that pattern and can automate the detection of similar attacks in the future. This frees up threat hunters to look for more novel attacks, increasing their productivity by over 50%.
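In its simplest form, the payoff is that a pattern the hunter chased once becomes a detection that runs continuously. Here is a minimal sketch; the event fields and the pattern itself are hypothetical.

```python
# Minimal sketch of codifying a hunter's one-off pattern as a reusable detection.
# Event fields and the pattern below are hypothetical.
def office_spawns_shell(events: list[dict]) -> list[dict]:
    """Flag events where an Office app spawns a shell that then makes an
    outbound connection -- a pattern a hunter might have run down by hand once."""
    office = {"winword.exe", "excel.exe", "powerpnt.exe"}
    shells = {"powershell.exe", "cmd.exe"}
    return [
        e for e in events
        if e["parent"].lower() in office
        and e["process"].lower() in shells
        and e.get("outbound_connection")
    ]

# Once encoded, the same detection runs automatically against new telemetry.
sample = [
    {"parent": "WINWORD.EXE", "process": "powershell.exe", "outbound_connection": True},
    {"parent": "explorer.exe", "process": "cmd.exe", "outbound_connection": False},
]
print(office_spawns_shell(sample))
```

The ML layer described above goes a step further by generalizing from the hunter's behavior rather than requiring each rule to be written by hand.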
4. Manage identity access better
Within any organization, an entity—be it human, machine, or application—can have multiple identities. A person may have multiple email addresses, usernames, and passwords. A machine may have an IP address, a machine name, and a network name.
Resolving identities is important for determining who is doing what on any system. However, managing those identities can create a lot of manual labor, especially when designing rules to connect the dots between various entities and the data that belongs to them. AI can lighten that burden by detecting the patterns those rules are built on, which makes the identity and entity cleanup process much easier and faster.
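The mechanical core of that cleanup is stitching many identifiers into one entity. Here is a minimal sketch using a union-find structure; the identifier pairs are hypothetical, and in practice the links themselves are what ML helps discover.

```python
# Minimal sketch of stitching multiple identifiers into one entity with union-find.
# The identifier pairs below are hypothetical.
from collections import defaultdict

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Identifier pairs observed belonging together, e.g. from login logs
# and the asset inventory.
links = [
    ("jsmith@example.com", "user:jsmith"),
    ("user:jsmith", "10.1.2.34"),
    ("10.1.2.34", "host:JSMITH-LAPTOP"),
    ("asmith@example.com", "user:asmith"),
]

uf = UnionFind()
for a, b in links:
    uf.union(a, b)

# Group identifiers by the entity they resolve to.
entities = defaultdict(list)
for ident in list(uf.parent):
    entities[uf.find(ident)].append(ident)
print(dict(entities))
```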
5. Identify potential threats faster
Searching data for potential threats can be a bear for security teams, but AI- and ML-powered search can turn that bear into a lamb.
Threat hunters are digging through vast amounts of data, looking for patterns, typing in search strings. They need results fast, at something close to the speed of thought, but because their searches can span months or even years of data, they also need that performance at scale. And like everyone else, threat hunters are usually challenged by budget constraints, so they're looking for that speed and scale at a low cost to their organizations.
At Interset, the analytics division of Micro Focus' CyberRes line of business, our team has developed a combination of analytical methods to create a threat search engine that's faster than searching traditional databases—at a fraction of the cost of other methods.
One of the reasons Google can produce fast results is that it has hundreds of thousands of machines indexing the Internet 24/7. Our threat search engine doesn't use hundreds of thousands of computers, but it is relatively quick. For example, indexing 500 million records would take a current top threat hunting search engine nine hours, whereas our threat engine can do it in 30 minutes.
In our benchmark tests, running a top-performing search engine costs $1,200 a month, while running our threat search engine on equivalent hardware costs $68 a month. That cost savings matters because it lets threat hunters bring in more data and context, increasing their efficiency and efficacy and, ultimately, their productivity.
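For reference, here is the back-of-the-envelope math implied by the figures quoted above; the underlying benchmarks are as reported, not reproduced here.

```python
# Back-of-the-envelope check of the figures quoted above.
index_time_incumbent_min = 9 * 60   # 9 hours to index 500 million records
index_time_ours_min = 30            # 30 minutes for the same 500 million records
speedup = index_time_incumbent_min / index_time_ours_min   # 18x faster indexing

cost_incumbent = 1200               # dollars per month
cost_ours = 68                      # dollars per month
cost_ratio = cost_incumbent / cost_ours                    # roughly 17.6x lower cost

print(f"{speedup:.0f}x faster indexing, {cost_ratio:.1f}x lower monthly cost")
```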
6. Secure IoT devices
Devices that make up the Internet of Things are becoming a critical part of real-world infrastructure. They can be found in all types of vehicles, throughout the electrical grid, and in more and more homes. Together they form a huge threat surface that is increasingly important to protect.
Through the use of AI and ML, techniques used to protect organizations against insider threats and advanced attacks are being applied to IoT devices. AI and ML allow data on individual devices to be gathered and analyzed in real time so that, if a device misbehaves, it can be immediately identified. In addition, multiple clues and anomalies can be aggregated dynamically to tell security teams exactly where to look for trouble.
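To illustrate the idea, here is a minimal sketch of per-device anomaly scoring with clue aggregation; the telemetry fields, baselines, and thresholds are hypothetical.

```python
# Minimal sketch of per-device anomaly detection with clue aggregation.
# Telemetry fields, baselines, and thresholds are hypothetical.

# Baseline behavior learned per device, e.g. from historical telemetry.
baselines = {
    "thermostat-01": {"msgs_per_min": 2.0, "dest_ips": 1},
    "camera-07":     {"msgs_per_min": 30.0, "dest_ips": 2},
}

def anomalies(device: str, observed: dict) -> list[str]:
    """Return the individual clues where a device deviates from its baseline."""
    base = baselines[device]
    clues = []
    if observed["msgs_per_min"] > 5 * base["msgs_per_min"]:
        clues.append("traffic spike")
    if observed["dest_ips"] > base["dest_ips"] + 3:
        clues.append("unusual number of destinations")
    if observed.get("new_firmware_hash"):
        clues.append("unexpected firmware change")
    return clues

# Aggregate clues so the security team gets one prioritized alert per device.
telemetry = [
    ("thermostat-01", {"msgs_per_min": 40.0, "dest_ips": 9, "new_firmware_hash": True}),
    ("camera-07",     {"msgs_per_min": 31.0, "dest_ips": 2}),
]
for device, observed in telemetry:
    clues = anomalies(device, observed)
    if len(clues) >= 2:   # multiple independent clues -> high-confidence alert
        print(f"{device}: investigate ({', '.join(clues)})")
```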
Machines vs. humans
Ironically, the reason that AI and ML approaches are effective in finding insider threats and advanced attacks is that threat actors are quite predictable in how they behave. Behind every cyberattack is a human, and every human, even those using AI in their attack strategy, exposes detectable characteristics of motivation, mission, and targeting. That predictability allows AI and ML to use quantitative approaches to uncover clues that can expose even the most sophisticated attacks.
Keep learning
Learn from your SecOps peers with TechBeacon's State of SecOps 2021 Guide. Plus: Download the CyberRes 2021 State of Security Operations.
Get a handle on SecOps tooling with TechBeacon's Guide, which includes the GigaOm Radar for SIEM.
The future is security as code. Find out how DevSecOps gets you there with TechBeacon's Guide. Plus: See the SANS DevSecOps survey report for key insights for practitioners.
Get up to speed on cyber resilience with TechBeacon's Guide. Plus: Take the Cyber Resilience Assessment.
Put it all into action with TechBeacon's Guide to a Modern Security Operations Center.