What do you want out of test automation? I ask my clients this because I want them to succeed. Without knowing their objectives for test automation, they can't.
You can define a goal by a set of gauges, called "success criteria." When your success criteria are met, you've succeeded.
In my recent article, "Why converting test teams to automation is a challenge," I made an assumption that I didn't define: what success means. Let me explain.
I rarely have the luxury of targeting my own success criteria. As a consultant, I have to learn about the goals of my clients and understand their criteria for success. Then I develop a strategy that I think will work.
Every team is different. Each sits at a different level of maturity across many dimensions: process, finance, regulation, technology, and skill. To say that every team should use my success criteria would be irresponsible, and I wouldn't advise it.
On the other hand, you need to know my success criteria to better understand why I see so little success with teams converting large groups of hands-on testers to automation. Here they are.
Is automation your default practice?
I agree that automating every test is a bad idea. But it's extremely beneficial to assume that the default practice is to automate every test and then force the argument for "why we should not automate this test."
What's not helpful is the continuous churn so many teams experience in attempting to define, redefine, and implement standards for what should be automated. Most teams waste time they could spend automating or learning new skills, and end up with a watered-down, incomplete list of minimally viable conditions for using the automation abilities they already have.
Embrace the question, "How are we going to automate this?" It will push you to grow your skills, your abilities, and your automation.
Teams that think "automate first" automate more.
Do your automated tests execute anywhere, anytime?
An automated test is a client of a system under test (SUT). A test client should be able to run anywhere, anytime. Certainly, some test clients have constraints that force the tests to run in a specific environment, but most of the time that's not the case.
We've all had defects resolved with the explanation, "It works on my machine." Running automated tests on different machines shows you exactly which machines the application works on. If it doesn't work on every system, you need to know why.
Developers should be able to run the tests on their machines and use them for debugging. Testers, business analysts, and product managers should have access to tests to explore their own questions about the application. When tests are resilient and trusted, all of these people can gather information about the SUT with as few bumps in the road as possible.
In other words, force the team to work through environmental constraints by having test-client code execute on different machines.
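For example, here's a minimal sketch of what an environment-agnostic test client can look like, assuming pytest and a hypothetical SUT_BASE_URL environment variable that each machine points at whatever environment it can reach:

```python
# A minimal sketch of an environment-agnostic test client, assuming pytest.
# SUT_BASE_URL is a hypothetical variable; any machine that sets it (a dev
# laptop, a tester's workstation, a CI agent) runs the same test unchanged.
import os

import pytest
import requests


@pytest.fixture
def base_url():
    url = os.environ.get("SUT_BASE_URL")
    if url is None:
        pytest.skip("SUT_BASE_URL not set; point it at any reachable environment")
    return url


def test_health_endpoint_responds(base_url):
    # The /health endpoint is an assumption; substitute any cheap smoke check.
    response = requests.get(f"{base_url}/health", timeout=10)
    assert response.status_code == 200
```

Because the only machine-specific detail lives in one environment variable, there is no "works on my machine" excuse left for the test client itself.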
Do tests execute without human intervention?
If you have to push more than one button to get the tests to run, you're doing it wrong.
Automation is something that happens without people. Testing is something that is done by people.
Kick off the automation. Did you have to do anything before the reports could be viewed by decision makers? Did you have to do anything before starting the tests? If so, you are not meeting this criterion.
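Here's a sketch of what that one button can look like, assuming a pytest suite under a tests/ directory and a hypothetical reports/ location for the output:

```python
#!/usr/bin/env python3
"""One-button sketch: a single entry point with no prompts or manual steps.
The tests/ directory and the report path are assumptions for illustration."""
import subprocess
import sys
from pathlib import Path


def main() -> int:
    # Make sure the report location exists, then run the whole suite.
    Path("reports").mkdir(exist_ok=True)
    # Decision makers can open reports/results.xml as soon as this returns;
    # there is nothing else to click, set up, or babysit.
    result = subprocess.run(
        ["pytest", "tests/", "--junitxml=reports/results.xml"],
        check=False,
    )
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```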
Do your tests run frequently?
When automation is not run automatically against changing code, it gets stale and stops working. Force automation to run often. Set aside the tests that don't give credible results, and run the rest frequently.
Most of us don't have unit tests for our test automation. We find out whether it's right only when the application or the automation signals that something is wrong. Then comes the hard part—figuring out what's wrong.
Test your automation as well as your system often, with scheduled or triggered runs of automation. Continuous integration is great for this. CI is to test automation as compilers are to code. Check out my webinar on it.
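One way to set aside the non-credible tests while the rest run frequently is a custom pytest marker. The marker name "quarantine" below is my own assumption, not a pytest built-in:

```python
# A sketch of setting aside non-credible tests with a custom pytest marker.
# "quarantine" is my own marker name; register it in pytest.ini under the
# [pytest] "markers" setting to avoid warnings.
import pytest


@pytest.mark.quarantine  # flaky today; parked until it gives credible results
def test_search_suggestions_update_live():
    ...


def test_login_page_loads():
    ...


# Scheduled or triggered runs then execute only the trusted tests:
#   pytest -m "not quarantine"
```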
Are all executing tests trusted?
If the CEO came by your desk and asked you to explain test results, could you show her a report that is trusted? Would you feel you had to hide the truth? Would you have to massage the numbers?
Certainly there are times when you don't know why the results of the tests differ from those in a previous run. But there is a vast difference between knowing the results must be right and wondering what’s going on. Spend most of your time in that first camp by ensuring that trusted tests are the ones that run publicly.
Is 'rerun' acceptable?
If a test case fails once, you rerun it, and it passes, should the second run be more trustworthy than the first? If you were interacting with the system by hand and saw something that looked like a defect, you'd try it again, right? You'd ask, "Can we duplicate this?" Even if you couldn't, a good tester wouldn't let the anomaly go.
When they find what looks like a defect, good testers are like a dog with a bone. They try again and again until they have reproduced the behavior. Alternatively, with strong reasoning for why the behavior should be acceptable to the business, they can drop that bone.
Good automation is the same.
Why would a 50% success rate on a rerun be okay? What's the right rate? 66%? 75%? 80%? 83%? The problem with automated tests that sometimes fail is that they are always untrustworthy.
I find it helpful to consider that trust in my testing is binary. People either trust what I do, or they don't.
A test that exposes an unexpected result 5% of the time still exposes that result. Your testing won't be trusted unless you know what happens in that 5%. And once you know what's happening, you can communicate it by either sharing what should be changed in the application or by codifying tolerance for it in your automation.
That's how to avoid reruns. Trustworthy automation is successful automation. Trustworthy testing is successful testing.
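For instance, if those 5% of failures trace back to an asynchronous update in the SUT, you can codify the tolerance explicitly instead of rerunning blindly. Here's a minimal sketch; the order_client fixture and the 10-second deadline are assumptions for illustration:

```python
# A sketch of codifying a known tolerance instead of rerunning on failure.
# The assumption here is that the SUT updates order status asynchronously,
# so the test polls with an explicit, documented deadline rather than
# failing intermittently on a fixed-delay check.
import time


def wait_for(condition, timeout_s=10.0, interval_s=0.5):
    """Poll until condition() is true or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return condition()


def test_order_reaches_shipped_status(order_client):
    # order_client is a hypothetical fixture, used only for illustration.
    order = order_client.create_order()
    assert wait_for(lambda: order_client.status(order) == "SHIPPED"), (
        "Order never reached SHIPPED within the documented 10s tolerance"
    )
```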
Does maintenance take less than 15% of engineers’ time?
Poor automation forces automation engineers to spend more time debugging automation code than providing accurate and timely information to their customers (developers and product owners). Remember, 15% of an eight-hour day is already more than an hour. Anything over that is too high; less than 15% should be your goal.
You want automation engineers to spend their time building out tests and related automation, not maintaining the existing automation.
Is your reporting understandable at a glance?
Make your reporting on test automation as clear and simple as possible. Managers, product owners, and developers need to quickly know whether they need to act. If they do, they can drill down to get more granular information.
The goal with reporting should be clear, correct, concise communication that respects people's time and attention. Most executives already think of testing as a cost center. Don't make that image any worse with complex, convoluted, overly detailed reports.
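As one possible approach, an at-a-glance summary can be boiled down from a JUnit-style XML report; the reports/results.xml path follows the earlier sketch and is an assumption:

```python
# A minimal sketch of an at-a-glance summary built from a JUnit-style XML
# report, such as the one pytest writes with --junitxml.
import xml.etree.ElementTree as ET


def summarize(junit_xml_path: str) -> str:
    suite = ET.parse(junit_xml_path).getroot()
    # pytest wraps a single <testsuite> in a <testsuites> root element.
    if suite.tag == "testsuites":
        suite = suite[0]
    total = int(suite.get("tests", 0))
    failures = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    verdict = "ACT" if failures else "OK"
    return f"{verdict}: {total - failures}/{total} tests passed"


print(summarize("reports/results.xml"))
```

One line tells a manager whether to act; the underlying report remains available for the drill-down.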
Are your resources highly available?
Don't block test automation. When automation is waiting on a resource, an environment, or a system, everything stops: the development of new automation, the running of existing automation, and the verification that the application still works.
Imagine if all your developers were blocked; how fast would that get fixed? When testing is treated as being as important as the development of new features, teams fix blocked automation with the same urgency they would apply to unblocking developers. Good automation requires testing to be a first-class citizen.
Codebases, environments, and agents should all be ready for testing all the time. (Pro tip: Teams with test environments that are always up are also more likely to have production systems that are always up.)
Can your engineers produce many tests per day?
You should be producing many test cases per day. If not, then there is a problem with:
- Commitment to automation
- Commitment to testing
- Roles and responsibilities, or
- Implementation of automation goals
Test automation engineering is a development activity. Treat it as such. Expect frequent delivery of working code.
Does someone own automation full-time on each team?
Extreme programming expert Kent Beck has been quoted as saying, "Nobody has a higher-priority task than fixing the build." Learn more about the concept in Martin Fowler's seminal article, "Continuous Integration."
Someone on the team must be responsible for fixing or triaging issues with automation.
When no one owns automation, automation fails, becomes distrusted, and causes more work than it's worth. Own automation like a 16-year-old owns a car bought with his or her own hard-earned money. Wash it, polish it, change the oil, dream of it while you're asleep.
Does your automation team use software development best practices?
Because test automation is a software development activity, testers should be:
- Storing automation in source code control
- Maintaining existing automation
- Planning for future automation
- Fixing failing tests immediately, just as we would fix code that won't compile
- Treating technical debt in test automation just like any other tech debt in the application
- Tracking automation work with tasks, just like any other development work
- Adhering to coding standards and other established coding practices
Can tests be created before, or in parallel with, application code?
You can create test automation in parallel with, or even before, the application code. If your team is not doing this, it's possible that they just don't have the skills required. Test-driven development, behavior-driven development, and acceptance test-driven development have been around for many years. Not only is it possible to sync up dev and testing; with the right people, it's a low-risk proposition.
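Here's a minimal test-first sketch of that flow; apply_discount is a hypothetical application function used only for illustration:

```python
# A minimal test-first sketch: the test is written before the application
# code it describes. "apply_discount" is a hypothetical function, used only
# to illustrate the red-then-green rhythm.
import pytest


def test_discount_is_applied_to_order_total():
    # Red first: this fails until apply_discount below is implemented,
    # and in failing it drives the function's design.
    assert apply_discount(total=100.00, percent=10) == pytest.approx(90.00)


# Green: the application code, written afterward to satisfy the test.
def apply_discount(total: float, percent: float) -> float:
    return total * (1 - percent / 100)
```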
Size up your success
Your vision of success should be different for each team. Your team may or may not be ready for all of the criteria above. But if you can meet them, you're prepared to work in a continuous delivery world.
When you use criteria such as these, your customers will gain timely, accurate, and reliable insight into their applications. They will have high-quality applications and happy developers, testers, and product owners. Most importantly, they'll have delighted customers.
Are you assessing your automation and where it can go?