The right way to run a risk-assessment session: A 5-step plan
The first time I facilitated a formal team risk-assessment session, it was not pretty. The room was cold—and I don't mean the temperature. The morale of the attendees was at an all-time low.
My team had been working on a project in limited beta for about six months when we realized there was no way it could go to production, or even full beta, without dire consequences.
It wasn't enough to let others know we couldn't do this; we needed numbers we could show to management to justify a decision we knew was in the best interests of the company—and our clients.
That's where a risk-assessment session comes in.
Defining 'risk-assessment session'
As you might guess, a risk-assessment session is a method for talking through the risks of a project, application, feature, use case, or any other subset of your life. It’s designed so that you walk away with a better picture of the risks and can form a good path forward for testing or personal growth.
It’s both a communication tool and an avenue to ask questions.
Formal sessions
There are both formal and informal sessions. The former is a team-wide event where you sit down and do more than evaluate risk; you create cohesion among your teammates. In a formal session, everyone votes on the element of risk (impact of failure, probability of failure, any other attributes) and then discusses the results.
You'll find that every element of your team (development, product, requirements/business, QA) has a different view into the ecosystem you're working with and sees risk differently. Things will float to the surface that other parts of the team weren't aware of.
By the end, you'll have a clearer picture of what the product is and the part you play in it, and your team will be more aligned and more focused.
Informal sessions
The informal session is a sanity check for the QA engineer. You look at everything you've got in front of you and evaluate where you're going next. What's the highest impact if it fails? What's the highest probability that something will fail? Where are you going to concentrate?
This gives you an idea of your path forward and, most importantly, lets you as a tester self-evaluate. Did you make the right decisions based on how the project came out? Is your assessment of risk in line with the business or development? This is a way to tell yourself that you're growing as a tester and a team member. (How to get the most out of this type of session is discussed more in depth later on in this article.)
Running a formal session
"Formal" and "informal" are just names. Each type of session can be as regimented or as casual as the team needs.
That said, let's talk about the risk-assessment session itself.
It's best to have a third party facilitate your session. You want someone who doesn't have a huge stake in the project, so she can mediate without offering too much of her own opinion, which might color the way the meeting will go.
If you do use someone internal to the team, try to make sure she can stay impartial and truly facilitate. If you're losing a valuable point of view by doing this, you should re-evaluate your choice of moderator.
Once you've defined your terms, there are five steps to running a good session:
- Pick your features.
- Pick a rating system.
- Level-set your rating system.
- Assign values.
- Re-evaluate those values.
Let's take each of these in turn, starting with a step zero: define your risks.
Step zero: Define your risks
Once you’ve got a facilitator, step zero is to define terms. The sessions are no good if everyone has a different definition of what's going on.
Define risk—the impact of failure, and the probability that failure will occur. Discuss if there are any other risks you may want to account for in your product: Are there industry-specific risks such as third-party APIs or regulatory challenges? Do you want to take into account the newness of your team?
Anything can be used here, but be careful to focus only on items that contribute to your overall sense of risk. Include too many and you'll find yourself arguing about semantics for hours.
This step can often be done in an email before your session.
1. Pick your features
After your team has a good idea of what you’ll be talking about, move on to picking the features you’ll be discussing. Depending on how chatty your team is, expect to get through no more than six large features in a one-hour session.
Picking too many things to talk about often leads to people rushing and trying to get to everything at once. Set an expectation early that this is a set of features that could be discussed, not a goalpost you must reach.
When you're picking features, look for logical groupings or a logical story to tell. If you start with the login, for example, think about where your users will naturally head next. Do they need to upload photos? Add songs? Start writing a post? This will give you a natural flow that lets your session easily move from feature to feature.
Another method is to look at personas. If you have them set up for your project, it can be valuable to discuss the risks from the perspective of your personas. Does "Tech Savvy Traci" see the feature differently than "Late Adopter Lane"? You may uncover risks that will push you to test differently.
Again, this can be done before the session. An informal poll can give you a good idea of where to start, and it can be published the day before the meeting to get people thinking about the features or personas.
List them out in a collaborative document, either a Google Doc or a whiteboard, so everyone can see what's going on.
2. Pick a rating system
This is my favorite part: figuring out how your team will represent its concerns.
If you've already talked about your definitions and picked your features by email or another online method, choosing a rating system can act as a fun icebreaker for the in-person part of the proceedings.
Rating systems are limited only by your imagination. You can use a classic 1-to-10 system, high/medium/low, poker cards, T-shirt sizing (extra large, large, medium, small, extra small), emojis, or even cat photos (happy cats, angry cats—you get the idea). Feline photos are my personal favorites, because cats can express a wide range of emotions.
If you do use images, print them out for everyone before the meeting so people can hold them up as their responses.
3. Level-set your rating system
After you've agreed upon a rating system, your next job is to make sure it's properly calibrated. Start by asking what an impact from the lowest part of your rating system looks like, then the highest.
Perhaps for login, an extra-large failure looks like "anyone can log in as anyone else." Or an inability for anyone to log in at all. A small impact might be that 10% of people, or even fewer, can't log in.
Then do the same for probability. This will probably be the most contentious part, since you need to discuss the acceptable probability of failure. Maybe a 10 for probability is that password reset fails every time for everyone. A 1 might look like password reset failing once out of every 100 attempts or users.
Once you've got the bookends defined, it's much easier to slot other impacts and probabilities into the system.
Be sure to set time limits for both this step and the one before it; these can easily get out of hand if your team is very social. Try to get all of these setup steps done in 10 minutes, 15 at most, which leaves plenty of time for the meat of the risk-assessment session. Otherwise you can get trapped in procedure and never actually get to what you set out to do. And while 10 to 15 minutes doesn't seem like much, you can revisit your choices as needed during the meeting.
4. Assign your values
Start assigning values to the attributes of risk that you've defined. Ask the team to vote with fingers, cards, an app—whatever works for you. The important part is to make sure everyone's voice is heard.
Everyone sees the risk of an application a little differently, and those differing views help you become a better, more cohesive team. They are your strengths, and hearing from everyone makes the whole team feel valued in a concrete way.
Some discussion about the values is expected. Much like backlog grooming, the first vote gauges the temperature of the room and starts to show where you might be missing information.
Discussion will fill in those gaps, so you can vote again with far more confidence than the first time. Again, this should be time-limited. After 5 to 7 minutes, your discussion will start going in circles and create too much swirl to be useful.
Call for a new vote, and if there’s still a lot of contention, table the feature under discussion until you can get more information about it. This will often involve assigning someone to research the feature further and gather the information that your team needs.
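If your team votes with numbers, the tally-and-re-vote loop can be sketched in a few lines of Python. This is a minimal illustration, not part of the article's method; the 1-to-10 scale and the 3-point spread threshold are assumptions you'd tune for your own team.

```python
# Illustrative sketch: tally a round of numeric votes (1-to-10 scale) and
# flag features whose spread suggests the room needs more discussion.
# The 3-point threshold is an arbitrary example, not a rule.

def summarize_votes(votes):
    """Return (median, spread) for a list of numeric votes."""
    ordered = sorted(votes)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    spread = ordered[-1] - ordered[0]
    return median, spread

def needs_revote(votes, max_spread=3):
    """True when votes are far enough apart to warrant discussion and a re-vote."""
    _, spread = summarize_votes(votes)
    return spread > max_spread

# Example round: hypothetical impact votes for a "login" feature.
impact_votes = [8, 7, 9, 3, 8]
median, spread = summarize_votes(impact_votes)
print(median, spread, needs_revote(impact_votes))
```

A wide spread usually means someone in the room knows something the others don't; the discussion before the second vote is where that surfaces.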
5. Re-evaluate your values
After you’ve assigned values to some of your features, re-evaluate the things you looked at first. Are they still correct? Has your rating system evolved as you've moved through the exercise? If it has, that's perfectly fine. Your system should be a living thing that changes as your team refines it.
You may also find that the descriptor you're using isn’t working well and you want to change, for example, from cat photos to T-shirt sizes or numbers. As long as the team feels it's getting the information it needs, it's fine to switch it up.
Once you're satisfied with the numbers you have for each attribute, combine them. This is a little easier for numbers, but if you’re using T-shirt sizes, for example, you can estimate that a large and a small together roughly equal a medium, two larges are about an extra large, and an extra large and a small are probably about a large.
For images or emojis, either place both images next to each other or pick something in the middle. Always err on the side of the higher rating, as opposed to the lower.
For numbers, the results may look like the following table:
This gives you a clear way to talk about where you're going to focus your testing efforts, along with a clearer picture of the effort you'll need to expend. Do you need more resources? More time? More help? It's also a great way to show the level of effort required to get the features tested in a way that will provide the best results.
You’ll also find that your team communicates more clearly afterwards and has a better sense of cohesion. Bringing everyone together and making sure that you've got the same expectations can do wonders for a team's morale.
More about a personal risk assessment
You can use this method to check your growth as a QA pro.
First, look at the work you’ve got in front of you and jot down a quick risk matrix. You don’t need to go into much detail. Sometimes, I write down Jira story IDs in lieu of features or use cases—just enough that I know what I'm talking about.
Second, give each one a gut-feeling number for impact of failure and probability of failure. If you're feeling particularly bold, you can just jot down the riskiness of the things you're looking at.
This gives you a ranking of your items and lets you know where you should be focusing the majority of your attention. Is it where you expected? Do you look at your list and feel good about the plan you've created?
The next phase comes after the features have been tested and are rolled out. Now go back to your matrix and look at it. Were you right? Did you have to go back and re-evaluate some features you didn't expect? Did you contribute to the success of the project with your predictions?
This is a way of calibrating your gut instinct or your "QA instinct." It helps you determine if you're spending your resources in the correct places or cherry-picking simpler tasks.
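This calibration step can be sketched as a simple predicted-versus-actual comparison. Everything here is hypothetical: the story IDs, gut-feel scores, and post-release bug counts are placeholders, and comparing rank positions is just one rough way to check yourself.

```python
# Illustrative sketch: after rollout, compare your gut-feel risk ranking
# with where problems actually surfaced. All IDs and numbers are
# hypothetical placeholders.

predicted = {"PROJ-101": 54, "PROJ-102": 35, "PROJ-103": 16}  # gut scores
actual_bugs = {"PROJ-101": 5, "PROJ-102": 0, "PROJ-103": 4}   # found post-release

predicted_rank = sorted(predicted, key=predicted.get, reverse=True)
actual_rank = sorted(actual_bugs, key=actual_bugs.get, reverse=True)

# A story whose predicted rank matches its actual rank suggests your
# instinct was on target; a mismatch is worth a closer look.
for story in predicted_rank:
    verdict = ("on target"
               if predicted_rank.index(story) == actual_rank.index(story)
               else "re-examine")
    print(story, verdict)
```

Even this crude check makes the retrospective concrete: the stories flagged "re-examine" are where your QA instinct and reality disagreed.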
It gets easier
As you practice assessing risk, it will become easier and more natural, both for your projects and for your QA knowledge in general.
Creating concrete representations of risks and sharing them with the team strengthens the bonds you need to do good work. In the end, the risk-assessment session is a tool that you can use to improve your team's communication and your personal QA practice.
Want to know more about how to run an efficient and effective risk-assessment session? Come see my presentation, "The right way to run a risk assessment session: A 5-step plan," October 3 at STARWEST. (TechBeacon readers can use promo code SWCM for $200 off registration.) The software testing conference, in Anaheim, California, runs from September 30 to October 5, 2018.