As a researcher for the Micro Focus software security research team working on Fortify, I need to keep up to date with vulnerabilities in Java frameworks. Earlier this year, the Spring Expression Language (SpEL) injection vulnerability, found in the Spring Data framework, caught my attention. I was curious about this vulnerability, since I had worked on similar vectors in the Apache Struts 2 framework.
I took a look at the Spring source code and quickly identified a method where untrusted data that could be controlled by an attacker was being evaluated as a SpEL expression.
Then came my déjà vu moment. It appears that the Spring developers were recreating bugs similar to those introduced by Apache Struts developers a few years ago—that is, evaluating untrusted data as an Expression Language expression. But there is a significant difference, probably learned from those failed attempts in Struts 2: They are moving away from SpEL, where possible, and limiting the power of SpEL expressions to a small subset of secure features where SpEL expressions still need to be used with untrusted data.
Here's some background on this sort of vulnerability and how open-source providers are unwittingly repeating a pattern that could make the vulnerability worse, as well as a better approach to fixing what seems to have become a widespread problem.
The threat: Remote code execution
A remote code execution (RCE) vulnerability gives an attacker complete control over the web application server: the attacker can run arbitrary code on that server with the same privileges as the application server itself. An attacker who wants to, say, mine cryptocurrency can install miners on compromised servers and start mining bitcoins.
That, of course, depends on the attacker’s goal. If attackers are after sensitive data, as in last year’s Equifax hack, they will use their foothold in the network to pivot into the internal network and look for any assets they can leak or sell.
After breaking into a server that has local credentials to access a database, they can read those credentials and gain access to the database, compromise it in many ways, and pivot into other servers or elsewhere on the company’s intranet.
Cryptocurrency: A hacker's most valuable target
In the old days—i.e., a couple of years ago—hackers were simply trying to access data. That was the most valuable asset to be stolen. A data breach was followed by attempts to sell that data on the black market or to blackmail the targeted company. But the target seems to be changing these days. With the growing use of cryptocurrency, it’s more immediately profitable to mine for bitcoin or other cryptocurrencies.
Yes, there are legitimate uses for cryptocurrency. The point is that it’s out there in abundance, and it’s providing a new target for malicious hackers. Anyone can open an anonymous wallet, and no one knows who’s behind it. Skilled attackers with the right level of access can mine bitcoins from your server and move them to their own wallets.
And here’s the kicker: Even if you discover the hack, there may be little you can do about it. You may be able to see that your server has been compromised, and you might even discover the miner and his or her wallet ID, but you have no way of knowing who owns the wallet, you can't recover the stolen funds by pulling them out of it, and you can't shut down access to the wallet. There is no central bank or agency in control, and that anonymity makes the opportunity very attractive from an attacker's perspective.
Remember the WannaCry ransomware attack? Hundreds of thousands of computers were infected. Instead of looking for data, the attackers froze their victims' operations and demanded ransoms payable in bitcoins. If you were attacked, you paid into a wallet ID. With cryptocurrencies, even though the police may know the wallet ID, the only way they can learn the attacker’s identity is in the rare case that the attacker can be tied to the wallet, such as by withdrawing cash from a Bitcoin ATM. And how often is that going to happen?
Is history repeating itself?
After identifying the code evaluating user-controllable data as a SpEL expression, I was able to create a proof of concept (POC) showing that a server running Spring WebSockets could be compromised. The POC paid off: it was indeed possible, and I reported the issue to the Spring team.
WebSockets are a technology normally used by browsers and servers to exchange data in real time, and any application using Spring WebSockets with the default STOMP broker was vulnerable. This was not the only SpEL injection, either; another was recently found in Spring Data and Spring Webflow, and others have followed, such as the recently disclosed Spring Security SpEL injection.
In the past, the industry saw a similar trend with the Struts 2 OGNL injection, where researchers focused on a specific type of bug and started reporting it extensively for a given project. This is not necessarily bad, since it offers an exhaustive review of the source codebase by security experts.
Apache’s first version of the Struts framework, Struts 1, didn't use the OGNL expression language. OGNL was introduced in the second version of the framework, Struts 2, where it came to be used widely. The problem: It was overkill, and dangerous, to use OGNL to replace simple things such as bean property setters. In doing this, the framework exposed many OGNL expressions to external or user-controlled data. For example, request parameters were processed as OGNL expressions, which opened the framework to OGNL injection and ultimately to an RCE vulnerability. Basically, the APIs were reading user-controlled data and evaluating the input as OGNL expressions.
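To make that anti-pattern concrete, here is a minimal, hypothetical sketch of what such a sink looks like when the OGNL library is used directly. It is not the actual Struts 2 code; the evaluate method and its arguments are stand-ins for illustration only.

```java
import ognl.Ognl;
import ognl.OgnlException;

public class OgnlSinkSketch {

    // "userInput" stands in for attacker-controlled data, such as a request parameter
    // name that older Struts 2 versions ended up treating as an OGNL expression.
    static Object evaluate(Object root, String userInput) throws OgnlException {
        // Anti-pattern: evaluating an attacker-controlled string as an OGNL expression.
        // A harmless value such as "address.city" only reads a bean property, but a
        // crafted expression can invoke arbitrary methods and lead to RCE.
        return Ognl.getValue(userInput, root);
    }
}
```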
The power of EL expressions
EL expressions are powerful. They are like a simplified version of a programming language such as Java. They allow you to invoke methods, assign variables, and do most of the things you can do with Java. You can even bridge into Java and execute Java commands. Once you control one of these EL expressions, you can run arbitrary code remotely—in other words, you have an RCE on your hands.
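To see just how much reach a SpEL expression has, consider this minimal sketch using Spring's SpelExpressionParser. It assumes spring-expression is on the classpath, and the hardcoded expression stands in for what, in a vulnerable application, would be attacker-supplied data.

```java
import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class SpelPowerSketch {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();

        // With the default evaluation context, an expression can reference arbitrary
        // classes via T(...) and call their methods: essentially full Java reach.
        Expression exp = parser.parseExpression("T(java.lang.System).getProperty('java.version')");
        System.out.println(exp.getValue());

        // If that expression string came from an attacker, the same mechanism could call
        // something like T(java.lang.Runtime).getRuntime().exec(...), which is what turns
        // an EL injection into remote code execution.
    }
}
```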
The first approach the Apache team took to fix the OGNL injection issues focused on the public exploits and how they were executing the payload. They tried to prevent code execution by forbidding access to some of the objects used in these payloads. But attackers bypassed those protections by using different objects to achieve the same goal.
And that’s not the best way to fix this kind of vulnerability.
A better approach
When I reported the SpEL injection vulnerability to the Spring team in the winter of 2018, their first solution was to apply a similar technique to what had been done in the Struts 2 case: forbid access to static methods. But after I discussed with them what had gone wrong with the similar initial fix to Struts 2, they came up with a much better approach. It involved defining different levels of privileges.
SpEL is frequently used with internal, non-user-controllable data, which is fine. But it should never be used with untrusted data, which would let an attacker unleash its full power. Still, there are cases where user-controllable data needs to be evaluated as a SpEL expression. To handle those cases more securely, the Spring team has introduced the SimpleEvaluationContext, which lets you reduce the level of access to a minimum, effectively limiting these expressions to a very small subset of secure operations.
Here’s the warning, in a nutshell: Do not evaluate any kind of EL expression created with untrusted data. If you must, use a restricted evaluator that does not allow an expression to, for instance, invoke arbitrary methods.
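Here's a minimal sketch of that approach using SimpleEvaluationContext (assuming a Spring Framework version that includes it; the Person class is a hypothetical data-binding target). The same parser is used, but a harmless property read works while an expression reaching for arbitrary Java types is rejected at evaluation time.

```java
import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.SimpleEvaluationContext;

public class RestrictedSpelSketch {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();

        // A restricted context: read-only data binding against the given root object,
        // with no type references, no constructors, and no arbitrary method resolution.
        SimpleEvaluationContext context = SimpleEvaluationContext
                .forReadOnlyDataBinding()
                .build();

        Person root = new Person("Alice");

        // A harmless property expression still works against the root object.
        Expression ok = parser.parseExpression("name");
        System.out.println(ok.getValue(context, root));

        // An expression that tries to reach into arbitrary Java types is rejected
        // at evaluation time instead of executing.
        Expression bad = parser.parseExpression("T(java.lang.Runtime).getRuntime()");
        try {
            bad.getValue(context, root);
        } catch (Exception e) {
            System.out.println("Blocked: " + e.getMessage());
        }
    }

    // Hypothetical class used only for this sketch.
    public static class Person {
        private final String name;
        Person(String name) { this.name = name; }
        public String getName() { return name; }
    }
}
```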
Developers, keep a close watch
Development and security should go hand in hand. Ideally, developers should not be living in their own world without understanding what is happening in the security sector. They need to know what new vulnerabilities and vectors are appearing and how their code might be affected. This is easier said than done, and certainly developers have enough on their plates, so evaluating and triaging new attacks, vulnerabilities, and vectors may seem beyond what’s possible.
But it’s a good idea to have security champions on your development teams with some time assigned to do exactly that.
For example, if the security community is noticing a trend regarding Expression Language injection in other frameworks, then it’s time to tell your team to review where and how you’re using your own frameworks. Are XML parsers being attacked with XXE vulnerabilities? Then maybe you should review the configuration of your own XML parsers.
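For instance, if XXE is the trend of the day, such a review might check that every XML parser is locked down along these lines. This is a generic, commonly recommended baseline for the JDK's DocumentBuilderFactory, not a one-size-fits-all fix, so adapt it to the parsers you actually use.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class HardenedXmlParserSketch {

    // A commonly recommended hardening baseline; adjust to your parser and requirements.
    static DocumentBuilderFactory newHardenedFactory() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Disallow DOCTYPE declarations entirely, which blocks most XXE vectors.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and suspenders: disable external entities and entity expansion.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }
}
```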
In this particular case, devs should be asking themselves if they have something similar to OGNL in their own frameworks. If that turns out to be the case, then devs absolutely need to run some form of static analysis on their code.
Too often, developers focus on building the features they need in their frameworks, while the security community is keenly aware of how those frameworks have been compromised. The security community's job is to stay up to date in ways that aren't a focus for non-security-minded practitioners, and it frequently sees patterns that carry over from one framework to another. Developers should be aware of this research.
To be fair, the security community is also failing to educate the software development industry: it does not provide direct, actionable advice that could prevent some of these vulnerabilities from being introduced in the first place.
But, in the meantime, if there’s something big in the news, developers should ask themselves if they have constructs or coding patterns similar to those that were exploited in the attack.
Rely on static testing
Be aware of the latest attacks on frameworks similar to those your team is using, and try to stay up to date. Use static application security testing (SAST). If the community had relied only on testing methods that don't analyze the source code directly, pen testing included, this RCE vulnerability would never have been found.
The only way to go is with code review and static analysis. In the case of the SpEL injection vulnerability I found, triggering it requires adding a new header to the WebSocket message, a header that is not normally used. If you are inspecting the traffic with a dynamic application security testing (DAST) scanner, you won't see that specific header in use, so you're unlikely to inject payloads into it. The only way to know that there is a header called “selector,” whose value will be evaluated as a SpEL expression, is by reading the code.
The same goes for interactive application security testing (IAST). You're not going to see data flowing inside the Java Virtual Machine from a WebSocket message header to a SpEL sink, since 1) most IAST solutions have no support for WebSockets, and 2) IAST only tests what it sees. If regular traffic does not exercise a vulnerable feature, it won't be analyzed.
Don’t get me wrong. All testing techniques have their place in the secure development lifecycle, since they target different aspects of the existing bugs. There is no silver bullet for finding them all, but code review and static analysis are your best line of defense.