The year was 2013. Candy Crush was at the top of the App Store, Grumpy Cat was making the rounds on the Internet, and I was introduced to a better way to test software, one that doesn't involve test cases.
My software testing days started in 2008, when I began working in customer support for a small startup. I was able to suggest new features, and I had the opportunity to test them as they rolled out to production.
Through the years, as we added more complex features, finding and reproducing bugs became more difficult. I began reading about how others test software, and I quickly came to the conclusion that it was not a profession I wanted to pursue. Everything I read online described a waterfall development approach where developers sent the requirements to testers weeks or even months in advance, and the testers were expected to think through every single test case they should execute once the developers finished.
Fast forward to 2013. I reached out to a friend who had recently joined the quality assurance team in a customer support role at a progressive software vendor. I asked how he liked his new role, and he exploded with excitement. When I asked how he could be so excited about testing through a backlog of hundreds of test cases or writing test cases for new functionality, he responded with, "We don't do that; we do exploratory testing."
I joined that team and have thrived without test cases ever since.
Caveat: Context is king. I do not expect that every industry, or even every software company, can adopt everything I do, but I encourage you to open your mind, challenge your beliefs, and see if there are any takeaways you can apply to your own situation. (For example, the software I'm responsible for testing does not put anyone's life in danger if it were to fail.) With that in mind, here's my advice.
Work toward a high-level understanding of the software you are testing
This is the biggest factor in ditching the mountain of test cases you and your company have accrued. Your team's mindset has to change from being handed exact inputs and outputs, designed months or years ago, to exploring the system for yourselves. There are many ways to accomplish this.
Within my company, we have a self-paced training video series for new team members that covers customer-facing product features along with administrative configuration pages. This has allowed our testing team to approach testing from a systems-minded perspective.
Because we look at how the system operates as a whole, our team has a greater understanding of how different areas of the software influence one another. Having a broad understanding of the system gives a great launchpad into deeper learning and exploration of new or existing features within the system.
Create a feature map
The feature map is another tool we use to get ourselves into that high-level-understanding mindset. Our map consists of a list of all the pages in the system, each with a brief description and a list of the links and features available on that page.
New team members can use the map to become familiar with the software, or they can reference it when exploring a new area of the system; this helps them visualize the entire system.
We can also use this tool to regress the software for code version upgrades, new infrastructure migrations, or risky site-wide changes such as user permission updates. This simple document doesn't have much detail on the specifics of how the features work; we use it as a tool to spark testing ideas, or as a reminder of the scope of certain features.
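To make this concrete, here is a minimal sketch of what a feature map might look like if you chose to keep it in code rather than in a document. The page names and features below are hypothetical, and the checklist printer is just one way to turn breadth into a quick regression sweep:

```python
# A minimal feature-map sketch. Entries pair a page with a short
# description and the features reachable from it; the goal is breadth
# and idea-sparking, not step-by-step detail.

FEATURE_MAP = {
    "Login page": {
        "description": "Entry point for all users",
        "features": ["password login", "SSO", "password reset"],
    },
    "User profile": {
        "description": "Account details and preferences",
        "features": ["avatar upload", "notification settings", "API tokens"],
    },
    "Admin console": {
        "description": "Site-wide configuration",
        "features": ["user permissions", "feature flags", "audit log"],
    },
}


def print_regression_checklist(feature_map: dict) -> None:
    """Turn the map into a quick checklist for a site-wide regression sweep."""
    for page, info in feature_map.items():
        print(f"[ ] {page}: {info['description']}")
        for feature in info["features"]:
            print(f"    [ ] {feature}")


if __name__ == "__main__":
    print_regression_checklist(FEATURE_MAP)
```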
Achieve a deep understanding of specific features within your team
Once a new team member has gone through the onboarding process, the real fun begins. Around week three, new testers join a product delivery team consisting of developers, a designer, a product manager, and testers. The testers are integrated members of this agile team, participating in sprint planning, refinement meetings, and retrospectives.
On these delivery teams, we add new functionality to the system through new features, integrations, refactors, and bug fixes. In each of these areas, both developers and testers work together to refine user stories, establish acceptance criteria, and give clear technical direction as to how to solve the problem.
The refinement meeting is also a great place for testers to challenge acceptance criteria and to have conversations with developers in which they go over the specific areas where they plan to apply exploratory testing. When you do this, it is possible to find bugs before they are even coded, which is a huge efficiency gain.
The ability to identify complex or problematic areas comes from knowing the new feature that is being built and from being involved in developer discussions. If you know that the developer is planning to solve the problem with an asynchronous rather than a synchronous solution, you can change the way you test the feature. Gaining an in-depth understanding of the new feature or functionality allows the team to deliver higher-quality software by identifying and fixing those not-so-obvious defects.
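As one illustration, testing an asynchronous implementation usually means polling for an eventual outcome rather than asserting on an immediate return value. Here is a minimal sketch in Python; the export functions named in the comments are hypothetical stand-ins for your own system:

```python
import time


def wait_for(condition, timeout_s: float = 10.0, poll_s: float = 0.5):
    """Poll until condition() returns a truthy value or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_s)
    raise AssertionError(f"condition not met within {timeout_s}s")


# Synchronous feature: assert on the return value immediately.
#   assert export_report(report_id).status == "complete"
#
# Asynchronous feature: kick off the job, then poll for the outcome
# (export_report, start_export, and get_job_status are hypothetical).
#   job = start_export(report_id)
#   wait_for(lambda: get_job_status(job.id) == "complete", timeout_s=30)
```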
Create good documentation for things that are not straightforward
The majority of the features in well-designed software are very straightforward and don’t require documentation to understand what the product is supposed to do.
Having test cases for straightforward features like these makes no sense. However, there are certain things that may require well-written, easy-to-follow documentation. The question I ask my team when determining whether something needs documentation is: "Did you need to have a conversation with anyone in order to complete the testing?" If the answer is yes, you probably need some sort of documentation.
Some examples of this would be when testing complex batch jobs, third-party integrations, hidden features that may not be listed within the software, or technical testing tasks.
Our test team has a wiki where we share and update information on how to test these complex areas. The idea is that once one tester has figured it out, that tester should create documentation to walk any other tester through it without needing deep knowledge of the feature.
Write automated tests for your critical paths
One of the main reasons test cases became the standard in most organizations is that there are critical paths in our software systems that should always work. These are areas that our users rely on heavily and that should behave consistently. For these critical paths, I recommend having some sort of automated test coverage.
This is not a job for your summer test intern to take on. To write reliable, sustainable, consistent automated tests, rely on your developers, or on someone with a proven track record in test automation, for guidance. This is not easy to get right, but once you have a good, reliable suite of automated tests, you will have higher confidence in the quality of your releases.
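For example, a critical-path check might look something like the following pytest-style sketch. The base URL, endpoint, and credentials are hypothetical placeholders, not a prescribed setup; in practice you would point these at a dedicated test environment:

```python
# test_critical_paths.py -- run with pytest
import requests

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_login_returns_session_token():
    """Critical path: a valid user can authenticate."""
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"username": "test-user", "password": "test-pass"},
        timeout=10,
    )
    assert response.status_code == 200
    assert "token" in response.json()


def test_checkout_completes_for_valid_cart():
    """Critical path: the revenue-generating flow works end to end."""
    ...  # fill in with your own system's most heavily used flow
```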
Do targeted regression testing
In 2014 our full regression testing cycle took one team member two weeks to complete. Those were the days when we had one release every two months, and we would test through our feature map. Much has changed since then.
Now we deploy major features on a two-week cadence, and we can deploy bug fixes on any day, at any hour. Even with automated tests covering our critical paths, we still do targeted regression testing. Prior to release, we have a code freeze, during which no new code can be added to our proposed release branch.
During this time the test team works together, reviewing the features and fixes that are going to be released to assess the risk of each item. We do quick, targeted regression testing to ensure that there are no code merge issues and that what will be deployed is consistent with the previous testing done by the tester on the feature team.
To track what's been tested, we use test sessions rather than test cases. While a test case is a step-by-step, pre-designed list of actions meant to check that a feature behaves as expected under given parameters, a test session is an artifact created by a tester while doing exploratory testing.
A test session is associated with every user story or bug ticket. We track what was tested, who tested it, in what environment, and with what test data. We also use these test sessions to communicate everything we tested to mitigate the risks. The idea is that when the ticket is ready to be released, any member of the team can review the session and have a solid understanding of what has been tested.
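To illustrate, a session record can be as simple as the following Python sketch. The field names and ticket ID are hypothetical, not a prescribed format; the point is that anyone on the team can read the record and understand what was covered:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class TestSession:
    """One exploratory-testing session attached to a story or bug ticket."""
    ticket_id: str    # user story or bug ticket the session covers
    tester: str       # who tested
    environment: str  # where it was tested
    test_data: str    # what data was used
    charter: str      # what the session set out to explore
    notes: list[str] = field(default_factory=list)            # what was actually covered
    risks_mitigated: list[str] = field(default_factory=list)  # risks addressed
    session_date: date = field(default_factory=date.today)


# Hypothetical example of a filled-in session.
session = TestSession(
    ticket_id="STORY-1234",
    tester="jane.doe",
    environment="staging / Chrome 120",
    test_data="seeded demo tenant",
    charter="Explore bulk user-import edge cases",
)
session.notes.append("Imported 10k-row CSV; verified progress indicator and final counts")
session.risks_mitigated.append("Duplicate emails are rejected with a clear error")
```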
Try it for yourself
I don't hate test cases. There are times and places where a solid set of test cases makes a lot of sense. I would also never tell testers that their work is less valuable because they are using lots of test cases.
But I would tell them that there are more efficient ways to test a bug or a feature. If you don't believe me, try the techniques I outlined above for yourself. You'll find that it's a far better option than having to execute a batch of test cases.