With continuous integration (CI), you can reduce the risk of experiencing big problems just days before you're supposed to deliver the newest version of your product. Because code is integrated into a shared repository several times a day, each check-in can be verified by an automated build, allowing testing teams to detect different kinds of problems early. So how do you, as a performance testing engineer, keep up?
From my experience working with several performance engineering teams, I've come up with a test strategy that lets you automatically detect the exact moment someone commits a change that impairs system performance, along with a record of the precise change that caused it.
This is very different from doing load simulation for acceptance testing, where you simulate the whole load scenario with a test infrastructure that's similar to your production environment. This method isn't a replacement for that; it's a supplement.
The best part of this approach is that you can do this with free, open-source tools, and you can get started without too much effort.
Choose the right tools
Choose a CI-friendly tool that lets you easily compare versions and detect differences in your Git repository manager. I'm a proponent of Apache JMeter for load testing, but it stores tests as XML, which makes changes hard to review.
For CI purposes, it's better to use something based on code or simple text; that makes it easier to compare versions and detect exactly what changed. The open-source tools Gatling and Taurus are ideal for this kind of test.
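To make that concrete, here's a minimal sketch of what a test can look like in Gatling's Scala DSL (the host and endpoint below are made up for illustration). Because the whole test is plain code, every change to it shows up as a readable diff in your repository:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// A small, code-based performance test: easy to review and diff in Git.
class OrdersApiSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://perf-test.example.com") // hypothetical test host

  val scn = scenario("Orders API")
    .exec(
      http("list orders")
        .get("/api/orders")
        .check(status.is(200))
    )

  setUp(scn.inject(constantUsersPerSec(5).during(2.minutes)))
    .protocols(httpProtocol)
}
```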
Consider test levels
Load simulations are end to end, simulating the actions the browser performs (the user interactions). These tests are not easy to maintain, because they're sensitive to changes in those HTTP interactions (when you're dealing with web-based systems). For CI, a better tactic is to automate at the API layer, simulating the REST calls.
These tests are simpler and cheaper to prepare and maintain, yet you can obtain valuable information from them faster than by doing load simulations. Again, this is not a substitute for load simulations; it's a complement. So consider running load simulations from time to time as part of your continuous performance testing strategy.
Build the correct test infrastructure
As in any load test, your infrastructure should be exclusively for that test; otherwise, the results won't be reproducible and it will be more difficult to detect false positives. The more similar the test infrastructure is to that in production, the more accurate the results will be.
But if you don't have such a test infrastructure for your continuous load tests, don't worry. It can actually be better to run your tests on a scaled-down infrastructure: you won't need as many machines to generate a load that comes close to the breaking point, and it's easier to learn how the system behaves when it's running close to its limits.
Get the frequency and timing down
Test the most important things earlier and more frequently. You can't test everything; tests are expensive to create and maintain. The key is to prioritize and keep a reduced number of tests. From the total set, select the most critical ones, put them in a separate stage earlier in your pipeline, and run that stage for each build. Then let the full regression suite run once a day.
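As a rough sketch of how that split might look with Gatling (the class names, endpoints, and load figures are all illustrative), you can keep the shared request definitions in one place and expose two simulations, one per pipeline stage:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Shared protocol and requests, so both suites exercise the same definitions.
object OrdersApi {
  val httpProtocol = http.baseUrl("https://perf-test.example.com") // hypothetical test host
  val listOrders   = http("list orders").get("/api/orders").check(status.is(200))
  val getOrder     = http("get order").get("/api/orders/1").check(status.is(200))
  val searchItems  = http("search items").get("/api/items?q=books").check(status.is(200))
}

// Critical subset: short and fast, run on every build, early in the pipeline.
class CriticalApiSimulation extends Simulation {
  val scn = scenario("critical endpoints")
    .exec(OrdersApi.listOrders)
    .exec(OrdersApi.getOrder)

  setUp(scn.inject(constantUsersPerSec(5).during(2.minutes)))
    .protocols(OrdersApi.httpProtocol)
}

// Full regression: covers more endpoints for longer, scheduled once a day.
class FullRegressionSimulation extends Simulation {
  val scn = scenario("full API regression")
    .exec(OrdersApi.listOrders)
    .exec(OrdersApi.getOrder)
    .exec(OrdersApi.searchItems)

  setUp(scn.inject(constantUsersPerSec(10).during(10.minutes)))
    .protocols(OrdersApi.httpProtocol)
}
```

Your CI server can then launch only the critical simulation on every build (for example, by passing its class name to whichever Gatling runner your build uses) and schedule the full regression simulation as a nightly job.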
Create the load scenario and assertions
Which performance tests will you run continuously? With load simulations, you have to think about how people will use the system and try to match that. In this case, do something similar by thinking about how the API is going to be used, based on the user interactions. You can get that information with the help of developers or by analyzing the access logs.
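If you go the access-log route, even a rough script is enough to see which endpoints dominate real traffic. Here's an illustrative Scala sketch; it assumes a common log format where each line contains the HTTP method and path (the file name and pattern are placeholders you'd adapt to your own logs):

```scala
import scala.io.Source

// Rough sketch: count how often each API endpoint appears in an access log,
// to help weight the requests in the load scenario.
object EndpointFrequency {
  def main(args: Array[String]): Unit = {
    val logPath = args.headOption.getOrElse("access.log") // hypothetical log file

    // Matches "GET /api/orders" style fragments; query strings are dropped.
    val request = """(GET|POST|PUT|DELETE|PATCH) (/[^ ?]*)""".r

    val source = Source.fromFile(logPath)
    try {
      val counts = source.getLines()
        .flatMap(line => request.findFirstMatchIn(line).map(m => s"${m.group(1)} ${m.group(2)}"))
        .toSeq
        .groupBy(identity)
        .map { case (endpoint, hits) => endpoint -> hits.size }
        .toSeq
        .sortBy { case (_, hits) => -hits }

      // Print the ten most-hit endpoints with their counts.
      counts.take(10).foreach { case (endpoint, hits) => println(f"$hits%8d  $endpoint") }
    } finally source.close()
  }
}
```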
Another approach is to try to hit a fixed load that is close to the breaking point of the test infrastructure. Then define your assertions according to the results you get from the initial executions. With this approach, you can be sure that your CI will tell you which change generated a degradation as soon as it occurs.
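Here's an illustrative sketch of how those assertions might be encoded in Gatling, assuming the load figure and thresholds come from your own baseline runs (the host, endpoint, and numbers below are placeholders):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Fixed load near the known capacity of the (scaled-down) test infrastructure,
// with assertions derived from what the baseline runs showed was "normal".
class ContinuousPerfSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://perf-test.example.com") // hypothetical test host

  val scn = scenario("orders under fixed load")
    .exec(http("list orders").get("/api/orders").check(status.is(200)))

  setUp(scn.inject(constantUsersPerSec(20).during(5.minutes)))
    .protocols(httpProtocol)
    .assertions(
      global.responseTime.percentile3.lt(800), // ~95th percentile observed in baseline runs
      global.responseTime.max.lt(3000),        // worst case allowed before failing the build
      global.failedRequests.percent.lt(1)      // keep the error rate below 1%
    )
}
```

When any assertion fails, the build fails, so the pipeline points you straight to the commit that introduced the degradation.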
Now get started with your continuous performance testing
There are many other aspects to consider; for instance, what will collaboration look like? Who will take responsibility when bugs appear? How are you going to research those bugs (monitoring tools, logs, etc.)?
But in the meantime, this advice should help you get started on the continuous-testing path.
For more on the challenges to effective performance testing in continuous integration, see Federico Toledo's presentation at Agile Testing Days USA in Boston, June 25-29, 2018.