
Navigating Agile Test Case Management

Author: Enrique De Coss

Last updated: March 27, 2024


In an ideal world, every piece of code deployed to production would be free of defects, but we don’t live in a perfect world; humans still build software for other humans. In our practical world, testing catches those defects as a last line of defense before the software reaches the end user. It’s a big responsibility.

I often wonder whether we have the right approach to testing during a sprint. Too often, testing acts as a barricade, and in some cases QA is seen as a gatekeeper.

Think about how you plan testing activities in each sprint. Do developers work under the assumption that QA will catch every error? If you are an agile tester, a central tenet of agile methodology is to begin software testing as early as possible in the development process: creating test cases, validating functionality, writing test scripts, and delivering a high-quality piece of software at the end of the sprint. Given that premise, agile testers have limited time for testing activities; in this article, we will explore a model that helps you focus on the essential things during the sprint and add real value to your customers.

 

Understanding the Test Cases from an Agile Perspective

When we talk about test cases, we need to consider Glenford J. Myers and his book The Art of Software Testing, first published in 1979. Myers stated that “a successful test case is one that detects an as-yet-undiscovered error.” The most important consideration, according to Myers, is the creation of effective test cases. However, creating and completing test cases cannot, by itself, guarantee the absence of errors or the delivery of a high-quality application.

Now, let’s move to agile and look at the product backlog items on agile projects. Those items represent the work needed to complete the product or project, including features, bugs, technical work, and knowledge acquisition. User stories describe features from the customer’s perspective as high-level statements of desired functionality and goals. For every sprint, user stories on the product backlog are refined and pulled into the sprint backlog. Next, the team agrees on the acceptance criteria, the proposed solution approach, and the estimated effort needed to complete each story. Acceptance criteria determine when a user story works as planned.

It’s a good practice for agile testers to begin writing test cases from the acceptance criteria. However, while writing agile test cases, a tester will often face challenges related to the testability of the user stories. Sometimes, changes to the acceptance criteria are necessary, leading to rethinking or even completely rewriting the user stories.
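To make that concrete, here is a minimal sketch in pytest of how an acceptance criterion could be turned into an agile test case. The password-reset story, the stubbed reset_password() helper, and its messages are purely hypothetical and stand in for real application code:

```python
# A minimal sketch (pytest), not a real implementation: the password-reset
# helper below is a stub standing in for application code.
from dataclasses import dataclass

REGISTERED = {"registered.user@example.com"}  # hypothetical test data


@dataclass
class ResetResult:
    link_sent: bool
    message: str


def reset_password(email: str) -> ResetResult:
    """Stub for the real account service."""
    if email in REGISTERED:
        return ResetResult(True, "Check your inbox for a reset link")
    return ResetResult(False, "We could not find that account")


# Acceptance criterion: "A registered user who requests a reset receives a link."
def test_reset_link_sent_for_registered_email():
    result = reset_password("registered.user@example.com")  # When
    assert result.link_sent is True                          # Then
    assert "reset link" in result.message


# The same criterion usually implies a negative case for an unknown address.
def test_reset_rejected_for_unknown_email():
    result = reset_password("unknown@example.com")
    assert result.link_sent is False
```

Writing the negative case alongside the happy path is often where the testability questions about the acceptance criteria surface first.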

 

Optimizing Software Testing Effort During the Sprint

We identified some challenges agile testers face: 

  • Aggressive deadlines
  • Continuous repetitive testing cycles (including regression testing)
  • Changes requested from the stakeholder or modifications to the acceptance criteria
  • Little or no detailed documentation

During the sprint, a solo tester can face the enormous challenge of designing quality test cases that are easy to understand, updating or creating automated test scripts, and delivering a high-quality product, all at the same time. My suggestion is to create a test baseline inspired by the Minimum Viable Product (MVP): a Minimum Viable Test that uses Risk-Based Testing to prioritize activities.

Minimum Viable Test = Maximum Test Coverage with the Minimum Number of Test Cases
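As an illustration of what that prioritization could look like, here is a minimal sketch that scores candidate tests by likelihood × impact and keeps only what fits the sprint’s execution budget; the flows, scores, and minute estimates are made up:

```python
# A minimal sketch of risk-based prioritization: score each candidate test by
# likelihood x impact, then keep only what fits the sprint's testing budget.
# The catalogue of flows and all numbers below are made-up examples.

candidate_tests = [
    # (name, failure likelihood 1-5, business impact 1-5, minutes to run)
    ("checkout - pay with card",       4, 5, 20),
    ("checkout - apply coupon",        3, 4, 15),
    ("profile - change avatar",        2, 1, 10),
    ("login - wrong password lockout", 3, 5, 10),
    ("reports - export CSV",           1, 2, 25),
]

SPRINT_TEST_BUDGET_MIN = 45  # execution time available this sprint


def risk(test):
    _, likelihood, impact, _ = test
    return likelihood * impact


minimum_viable_suite = []
remaining = SPRINT_TEST_BUDGET_MIN
for test in sorted(candidate_tests, key=risk, reverse=True):
    name, _, _, minutes = test
    if minutes <= remaining:
        minimum_viable_suite.append(name)
        remaining -= minutes

print(minimum_viable_suite)
# ['checkout - pay with card', 'login - wrong password lockout', 'checkout - apply coupon']
```

The low-risk flows are not thrown away; they simply wait for automation or a later sprint instead of eating the limited time we have now.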

Frequently, testers create many agile test cases trying to cover all possible combinations of parameters and their sets of values (100% test coverage is impossible; please refer to Chaos Theory). The number of combinations becomes unmanageably high as the number of parameters and the set of values for each parameter grow, resulting in ever-larger test suites. Still, only a portion of these test cases can be executed during the sprint because of insufficient time and a lack of test automation, which lets defects leak through to end users.

Most of these test cases are redundant, as they do not find any new defects. So, it is essential to create an optimal set of agile test cases without compromising test coverage or inflating testing time.
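One well-known way to trim such a matrix, not something this model prescribes but a useful companion to it, is all-pairs (pairwise) selection. The rough greedy sketch below uses made-up parameters to show how two-way coverage can be kept with far fewer cases:

```python
# A rough sketch of all-pairs (pairwise) selection, one common way to cut a
# combinatorial test matrix without giving up two-way coverage. The parameters
# below are invented; a real project would plug in its own.
from itertools import combinations, product

parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "locale":  ["en-US", "es-MX", "de-DE"],
    "plan":    ["free", "pro", "enterprise"],
    "payment": ["card", "paypal"],
}

names = list(parameters)
all_cases = [dict(zip(names, values)) for values in product(*parameters.values())]
print(len(all_cases))  # 3 * 3 * 3 * 2 = 54 exhaustive combinations

# Every pair of values (across any two parameters) that must appear at least once.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}


def pairs(case):
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}


suite = []
while uncovered:
    best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))  # greedy pick
    uncovered -= pairs(best)
    suite.append(best)

print(len(suite))  # far fewer than 54 - typically around 9-12 cases
```

Each surviving case still exercises every pairwise interaction at least once, which is where most configuration defects hide; the rest of the 54 combinations add effort without adding much coverage.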

Our “Minimum Viable Test” model can guide agile testers during the sprint. It doesn’t replace your current process, and it is not a variation of “happy path” testing. If we want to maximize test coverage while reducing the number of test cases, we need to focus on the risky flows, supported by testing tools such as codeless/low-code frameworks, exploratory testing sessions, and functional validation of every story.

Let’s look in detail at those points:

Functional Verification: Functional testing of every story is vital during the sprint; we need to validate the acceptance criteria of all the user stories in order to complete them. For those validations, we can select a subset of test cases that cover the critical and risky flows. Remember, detailed tests still matter, but they are not our priority here.
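One possible way to keep that focus, assuming a pytest-based suite, is to tag the critical flows with a marker and run that subset first; the marker name and the tiny cart helper below are illustrative only:

```python
# A small sketch: tagging critical-flow checks with a pytest marker so the
# sprint run can execute them first. The marker name and the cart helper are
# illustrative stand-ins, not part of any particular framework.
import pytest


def add_to_cart(cart, item):
    """Stub standing in for real application code."""
    cart.append(item)
    return cart


@pytest.mark.critical
def test_item_added_to_cart_appears_in_checkout():
    cart = add_to_cart([], {"sku": "A-100", "qty": 1})
    assert cart and cart[0]["sku"] == "A-100"


def test_cart_badge_shows_item_count():
    # Useful coverage, but not a critical flow for this sprint.
    cart = add_to_cart([], {"sku": "A-100", "qty": 1})
    assert len(cart) == 1
```

Running pytest -m critical then executes only the tagged flows (registering the marker in pytest.ini keeps pytest from warning about it).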

Automated Test Scripts: We need test suites that verify our existing flows are still working correctly, plus validations for the new flows introduced during the sprint. As I mentioned, codeless/low-code tools can help us add those faster, but they are not mandatory if you already have another test framework or non-codeless tooling; either way, the new test scripts must be added during the sprint (test automation tasks must be part of the definition of “done”).
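As a sketch of what such a script might look like for a flow delivered this sprint, here is an API-level check written with pytest and requests; the staging URL, endpoint, payload, and “regression” marker are assumptions for illustration:

```python
# A minimal sketch of an automated script added for a new flow during the
# sprint and folded into the regression suite. The staging URL, endpoint,
# payload, and "regression" marker name are assumptions for illustration.
import pytest
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical environment


@pytest.mark.regression
def test_new_discount_code_endpoint_accepts_valid_code():
    # New flow delivered this sprint: applying a discount code at checkout.
    response = requests.post(
        f"{BASE_URL}/checkout/discount",
        json={"cart_id": "demo-cart", "code": "SPRING10"},
        timeout=10,
    )
    assert response.status_code == 200
    assert response.json().get("discount_applied") is True
```

Once a script like this runs green in the pipeline, the automation task for that story can be checked off as part of the definition of “done.”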

Exploratory Testing Session: I have had a lot of success pairing up with another tester or developer on the project. Together, we execute the same scenario in different environments or in different ways and discuss our observations. For example, if I test a flow in a web application, my colleague tests a variation of the same flow, and then we compare what we each found. Just by doing this, you can uncover many issues, inconsistencies, and unexpected behaviors. An exploratory testing session will give you good coverage of the application during the sprint.

We need to perform the same activities for integration and end-to-end (E2E) tests. For every feature, we should write E2E tests, perhaps not many, but enough to bring extra confidence to our testing. Integration tests strike an outstanding balance between confidence and speed/expense; in other words, we should favor a user-centered E2E approach over mock-only testing.
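For the user-centered E2E piece, a browser-level sketch like the one below shows the idea; it uses Playwright’s sync API purely as an example tool, and the URL and selectors are hypothetical:

```python
# A sketch of a user-centered E2E check using Playwright's sync API (an example
# tool choice, not one the model requires). The staging URL and selectors are
# hypothetical; the point is exercising the real flow instead of mocks.
from playwright.sync_api import sync_playwright


def test_guest_can_search_and_open_a_product():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        page.goto("https://staging.example.com")            # hypothetical env
        page.fill("input[name='q']", "wireless keyboard")   # user types a query
        page.press("input[name='q']", "Enter")
        page.click("a.product-card >> nth=0")               # open the first result

        # The user should land on a product page with an add-to-cart action.
        assert page.locator("button#add-to-cart").is_visible()

        browser.close()
```

A handful of flows exercised this way, against a real environment and through the UI the customer actually uses, buys more confidence per test than a large pile of mocked checks.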

 

 

Final Thoughts

In 2021, I don’t think anyone can argue that software testing is a waste of time. As I mentioned before, the challenge is to optimize our activities during the sprint and deliver high-quality products with true confidence in our agile test cases.

“Software testing is not only test cases; it is about helping others to provide high-quality applications to our customers.”

Enrique A. Decoss, Senior Quality Assurance Manager at FICO

Testing activities can demand a lot of time during sprints, whether you are a solo tester or part of a group of testers on the same scrum team. So let’s take a “less is more” approach: maximize our testing effort during the sprint and provide valuable testing activities that directly impact the quality of our applications.

Happy Bug Hunting!

Enrique De Coss

Enrique A. Decoss is a Quality Strategist with a focus on automation testing teams. A certified Scrum Master, Tricentis Tosca Certified Automation Architect, and Pythoneer, he focuses on web programming, API testing strategies, and different methodologies and frameworks. Enrique is currently at FICO but can be found sharing his knowledge on LinkedIn and Twitter.