When I worked on monolithic codebases, rules were very important. They were the only real way to stay sane in a big codebase. When everyone is working on the same thing, uniformity is key to ensuring that quality doesn't drop over time.
As a consultant I was often tasked with making sure that developers wrote the same kinds of tests, in the same way, to ensure that things didn't break and that it was easy for new team members to see how things are done.
But after nearly two years working in a microservices architecture, I've learned to break a lot of my own rules. When working with a large set of small, diverse applications, the rules just don't seem as important.
A microservices ecosystem pairs well with a more pragmatic approach to testing. Writing microservices introduces new opportunities to test services in different ways and to shake off some of the testing dogma we've accumulated from dealing with monoliths.
Here are some practical tips I've found helpful—and the rules I broke along the way.
1. Not all services need to be tested the same way
While working on monolithic codebases, we had to make sure every feature was tested in the same way. Once upon a time, we wrote a new feature and decided it didn't need a UI test, so we didn't write one. Somehow the next 10 features also got written without UI tests. And then there was a bug in production. So we created a rule for ourselves: Every feature must have a UI test. And more rules: Every class must have unit tests, and every line of code must be covered by a test. Every test must run in under 20 milliseconds. Every test must be written first.
And these rules helped. They helped us keep our giant codebases maintainable. They helped us to ensure that our quality standards were upheld. But they slowed us down.
Having small, separate services means you can use different approaches depending on the requirements for each service. In a microservices architecture, the kinds of tests you write need only be consistent within a small codebase. Different services will have different risk profiles and implementation approaches. They'll change at varying rates. While one service may need a large number of UI tests, another may require none. Yet another service might not need any automated tests at all.
It's worth noting that it's still really helpful to ensure that the tests for all microservices are run in the same way. The simplest way to achieve this is to use a tech-agnostic tool such as a Makefile or a Bash script to run your tests. That way every service can be tested with "make test" or "./test.sh". This will make managing your deployment pipeline much easier.
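For example, a Makefile with a standard "test" target can hide each service's language-specific tooling. This is only a sketch for a hypothetical Node.js service; the npm commands stand in for whatever toolchain the service actually uses:

```makefile
# Makefile for a hypothetical Node.js service. A neighboring Go or
# Python service would expose the same "test" target but run
# "go test ./..." or "pytest" behind it. (Recipe lines are
# tab-indented, as make requires.)
.PHONY: test

test:
	npm ci
	npm test
```

The deployment pipeline then only ever runs "make test", regardless of which service it happens to be building.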
2. Slower tests are fine
When building a monolithic application, you need to be conscious of how long your tests take to run. The longer they take, the more time is required to get a change into production. This has led us to focus most of our testing effort on writing unit tests that don't talk to any slow parts of the system (e.g., to a database).
The problem with these kinds of tests is that, while they're really fast, they don't test that the units actually work together to produce a working feature. In addition, they couple the tests very tightly to the current implementation. This makes refactoring really difficult, because you need to change dozens of tests to change the implementation, and it's hard to get feedback about whether you've done it right.
Microservices are smaller, so we don't need to worry as much about how long our tests take to run. Even tests that take two seconds each are fine, because you'll have a hundred of them rather than a thousand.
This gives us the freedom to reconsider what a unit of our application means. We can view a unit of our system as something much less granular. I often find that an API endpoint or a message handler is the most useful unit to test. These tests, though slower, provide high confidence and don't stand in your way when refactoring.
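As a sketch of what such a test can look like, here's an endpoint-level test for a hypothetical Node.js service. Express, Jest, and supertest are assumed, and the app, route, and response shape are invented for illustration:

```typescript
// users.test.ts: tests a whole endpoint rather than a single class.
import request from 'supertest';
import { app } from './app'; // the service's Express app, exported for testing

test('GET /users/:id returns the stored user', async () => {
  // Exercises routing, validation, and persistence together, so the
  // test keeps passing while the internals are refactored freely.
  const res = await request(app).get('/users/42');

  expect(res.status).toBe(200);
  expect(res.body).toMatchObject({ id: 42 });
});
```

A test like this runs in hundreds of milliseconds rather than microseconds, but in a small service that trade is easy to accept.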
3. Services can approach their contracts differently
The biggest complexity in a microservices architecture is the collaboration between services. The testing approach for each service needs to take this collaboration into account, but there is a broad spectrum of how each service might do this. For some, it will be enough to use a simple HTTP-level mocking library such as nock to provide mock responses from other services. Where communication happens via message queues, a similar kind of mocking is possible by publishing messages to fake queues.
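For the HTTP case, a nock-based test might look like the following sketch; the billing-service host, endpoint, and the getInvoiceTotal function under test are all hypothetical:

```typescript
// invoices.test.ts: stubbing a collaborating service at the HTTP level.
import nock from 'nock';
import { getInvoiceTotal } from './invoices'; // code under test (assumed)

test('sums the invoice lines fetched from the billing service', async () => {
  // Intercept the outgoing HTTP request; no real billing service
  // needs to be running for this test.
  nock('https://billing.internal')
    .get('/invoices/7')
    .reply(200, { lines: [{ amount: 10 }, { amount: 5 }] });

  expect(await getInvoiceTotal(7)).toBe(15);
});
```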
Some services might require a more comprehensive contract-testing tool such as Pact to keep track of and verify a large number of contracts. Still others may require a formal API testing and documentation toolset such as Swagger.
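To give a feel for the consumer side of a Pact test, here is a rough sketch using pact-js; the service names, provider state, and endpoint are invented, and Jest plus Node's global fetch are assumed:

```typescript
// userService.pact.test.ts: a consumer-driven contract test sketch.
import path from 'path';
import { Pact } from '@pact-foundation/pact';

const provider = new Pact({
  consumer: 'order-service',
  provider: 'user-service',
  port: 8989,
  dir: path.resolve(process.cwd(), 'pacts'), // where the pact file is written
});

describe('user-service contract', () => {
  beforeAll(() => provider.setup());
  afterAll(() => provider.finalize());

  it('serves a known user', async () => {
    await provider.addInteraction({
      state: 'user 42 exists',
      uponReceiving: 'a request for user 42',
      withRequest: { method: 'GET', path: '/users/42' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: 42 },
      },
    });

    // In a real test this call would go through the consumer's own
    // HTTP client code rather than a bare fetch.
    const res = await fetch('http://localhost:8989/users/42');
    expect(res.status).toBe(200);

    await provider.verify();
  });
});
```

The pact file this produces can then be replayed against the real provider, which is what makes the contract binding in both directions.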
4. The QA role can be fluid
It's very common for teams working on monolithic codebases to have team members dedicated to the role of quality assurance (QA). They test every feature before it reaches production.
For teams working in a microservices environment, the role of QA can be much more fluid. Some teams will be working on services that require a dedicated QA role, due to their complexity or prominence. Other teams will be able to share QA activities among the developers producing the code, without the need for a dedicated team member, because they're working on services that lend themselves to this approach. Another interesting pattern would be to have dedicated QAs focused on the ecosystem as a whole, rather than being assigned to specific teams.
5. You don't need nearly as many UI tests as you think
Sure, UI tests can be helpful. But they can also be a real pain to maintain and debug, and they often don't find the bugs they promise to find. The beauty of a microservices architecture is that it's easy to run UI tests for those services that need them, without slowing down the deployment cadence of your other services.
You can decide which services require UI tests and deal with the fact that those services will be slower to deploy. Because those services actually benefit from UI testing, the benefit will outweigh the cost. (Plus, you'll have a smaller, faster suite of UI tests, because the service is smaller.) Other services can still be pushed to production quickly, without having to wait for slower or less reliable tests.
Go faster
Breaking the rules over the past two years has had one major effect: I've been able to get more done. By understanding which services and features need which kinds of tests, and having the freedom to test each one accordingly, I've been able to get code to production much faster without compromising the quality of my work.