
The Falsifiability Approach To Our Software Testing Process

Author: Enrique De Coss

Last updated: October 1, 2024


As QA engineers, we create our test cases and test scripts based on customer-centric requirements; sometimes, we also simulate customer data to support specific scenarios (inductive reasoning). For example, consider the evidence that accumulates around the Christmas season: the number of holiday movies, the seasonal shopping, the Santa figurines.

With all this data, inductive reasoning could lead us to the inaccurate conclusion that Santa Claus exists. Following the same approach in software testing, we gather evidence to convince ourselves that our testing process is correct and that all our test scripts and test cases are right/green.


Inductive Reasoning vs. Deductive Reasoning

Inductive reasoning can be helpful for establishing that everything looks OK, but it needs its counterpart, deductive reasoning, to test the scenarios that uncover potential failures. Deductive reasoning flips the inductive approach and creates a hypothesis that challenges the initial conclusion. Taking the previous example, instead of gathering evidence that Santa Claus exists, we could look for evidence that he does not: we could survey whether anyone has actually seen him, or trace where the presents under the tree really come from, either of which would help disprove his existence.

According to Popper (1959), theories must be phrased and presented so that they are falsifiable. This does not mean a theory has to be proven wrong; it simply means the theory must allow further testing and admit the possibility of being false.
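
The same criterion applies to our assertions. Here is a minimal sketch in Python (pytest style) contrasting a check that can barely fail with one that genuinely can; the `apply_discount` function and its expected values are hypothetical, purely for illustration:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)


def test_discount_barely_falsifiable():
    # Weak assertion: it passes for almost any implementation,
    # so it can hardly be proven wrong.
    assert apply_discount(100.0, 10) is not None


def test_discount_falsifiable():
    # Strong assertion: it states a precise expected outcome,
    # so any deviation in the implementation makes it fail.
    assert apply_discount(100.0, 10) == 90.0
```

The first test almost always stays green; the second puts a claim on the line that the code can actually contradict.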


Bohr and Einstein: Stress-testing Quantum Mechanics

Niels Bohr and Albert Einstein were among science’s most significant intellectual rivals, engaging in public disputes about quantum mechanics. Their disagreement centered on the uncertainty principle, which states that it is impossible to determine a subatomic particle’s exact position and momentum at the same time. Bohr defended the uncertainty principle; Einstein opposed it.

The disagreement between Bohr and Einstein is what many people call “progress in science.” Their intellectual chess game was beneficial partly because both were masters of the steel-man technique: building the strongest form of the other side’s argument and engaging with it. It’s a simple idea but incredibly difficult in practice, because most of the time we are more interested in winning than in being correct.

Applying a similar stress test in software testing, we must recognize that nobody knows enough about the applications or services being developed. Developers sometimes feel they understand complex systems with far greater precision, coherence, and depth than they actually do. Only by honestly challenging those assumptions during our testing activities can we achieve the expected outcome for our customers.


If You Don’t Know Where You Are Going, You Might Not Get There.

Most of our decisions in life are based not on evidence but on hunches and leaps of faith. For example, we often launch new products or try a new marketing approach without a single test.

Too often, we conduct tests not to prove our products wrong but to confirm what we already believe is true. Unfortunately, speaking from experience, I have seen testing conditions manipulated to report positive results.

According to Feynman, “If it disagrees with experiment, it is wrong.” In a well-designed test, outcomes can’t be predetermined. In a proper test, the goal isn’t to discover everything that can go right; the real goal is to learn everything that can go wrong and find the breaking points. Rocket scientists try to break the spacecraft on Earth so that all its faults are revealed before they reveal themselves in space.

The Challenger disaster, the explosion of the U.S. space shuttle orbiter Challenger, shortly after its launch from Cape Canaveral, Florida, on January 28, 1986, claimed the lives of seven astronauts. The immediate cause of the accident was suspected within days and was fully established within a few weeks. The severe cold reduced the resiliency of two rubber O-rings that sealed the joint between the two lower segments of the right-hand solid rocket booster. (At a commission hearing, Feynman convincingly demonstrated the loss of O-ring resiliency by submerging an O-ring in a glass of ice water.) Under normal circumstances, when the shuttle’s three main engines ignited, they pressed the whole vehicle forward, and the boosters were ignited when the vehicle swung back to the center. However, on the morning of the accident, an effect called “joint rotation” occurred, which prevented the rings from resealing and opened a path for hot exhaust gas to escape from inside the booster.

Extracted from the Encyclopaedia Britannica.

Software testing helps turn unknowns into knowns. If we want accurate test coverage of our applications or services, we must apply the falsifiability approach to each test. In our spacecraft example, the testing conditions are rarely identical to the actual launch; instead, engineers mimic the launch environment as closely as they can (gravitational forces, weather, etc.) to surface every possible flaw.
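
One practical way to apply falsifiability to each test is property-based testing, where a tool actively searches for inputs that break a stated property instead of confirming a handful of hand-picked green examples. Here is a minimal sketch using Python’s hypothesis library; the `normalize_username` function is a hypothetical unit under test:

```python
# Requires: pip install hypothesis pytest
from hypothesis import given, strategies as st


def normalize_username(name: str) -> str:
    """Hypothetical function under test."""
    return name.strip().lower()


@given(st.text())
def test_normalize_is_idempotent(name):
    # Property on the line: normalizing twice must equal normalizing once.
    # hypothesis generates hundreds of strings trying to falsify it.
    once = normalize_username(name)
    assert normalize_username(once) == once
```

If a counterexample exists, the tool reports the smallest input it can find that breaks the property, which is exactly the breaking-point information we are after.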


Falsifiability in Software Testing and Exploratory Testing

Exploratory testing is an approach to uncovering risks. As QA engineers, we must navigate our applications and identify pertinent information under circumstances similar to those our consumers will face. Like rocket scientists, we must explore, investigate, and uncover the possible flaws before launch.

An exploration session can highlight the most important issues for the developer. Additional sessions with small groups can reveal more problems and show how exploratory testing informs extensive automation and functional testing.

Let’s examine some ways of using exploratory testing and falsifiability in software testing.


Emphasis: Working With Actual Customers’ Circumstances

As an exploratory tester, your goal must be to provide value. To do that, we need to choose the most significant customer scenarios to explore in our applications. There is no right or wrong answer about what to test first, but the riskiest flows are the usual candidates.

Some UI testers start by looking at input fields; it’s like “fill in the blank,” or like testing the Google search box. So, what relevant things could we enter there to expose flaws? One way to frame the question is sketched below.
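
For instance, a single search field can be probed with a parameterized set of hostile inputs, each chosen to falsify the claim that “any input is handled.” A pytest sketch follows; the `search` function stands in for whatever the field drives, and its contract here is an assumption:

```python
import pytest


def search(query: str) -> list:
    """Hypothetical function behind the search box; it should return a
    list of results without raising, no matter what the user types."""
    catalog = ["apple pie", "banana bread"]
    return [item for item in catalog if query.lower() in item]


@pytest.mark.parametrize("query", [
    "",                            # empty input
    " " * 1024,                    # whitespace only
    "a" * 100_000,                 # oversized input
    "Ω≈ç√∫˜µ≤≥÷",                  # non-ASCII characters
    "<script>alert(1)</script>",   # markup/XSS probe
    "' OR '1'='1",                 # SQL-injection probe
])
def test_search_never_crashes(query):
    result = search(query)
    assert isinstance(result, list)
```

Each parameter is a small hypothesis that the field cannot cope; every green run is a failed falsification attempt, not a proof of correctness.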

Some questions need answering if we want to reflect our customers’ real circumstances:

  • How professional is our testing operation?
  • Are we good enough to deliver high-quality applications/services?
  • What does the quality look like for our customers? 

Agile teams must understand their customers; more specifically, product owners must share the product vision and the expected outcome with the end users (are we good enough to deliver high-quality applications/services?).

We should maintain an open channel with our product owners and evaluate multiple scenarios (what does quality look like for our customers?), keeping in mind that what is delivered must be fit for purpose and meet the customer’s expectations. As agile teams, we should change and adapt: if our customers’ expectations change, we should adjust our testing scenarios (how professional is our testing operation?).


Explorer: Identify Breaking Points

As we mentioned above, the goal of our tests isn’t to mark all the aspects that can go right; instead, as testers, we must discover all the elements that can go wrong and identify the breaking points in our applications or services. We must reveal the flaws during the testing phase, before the faults reveal themselves to our customers.

Tweak things a little and we will see them: identify the calls and operations, the inputs and outputs, the requests and responses, any exceptions, and perhaps missing dependencies. We need to understand where we are and what the application’s dials are so that we can pivot while testing. This approach requires exposing every component to a failure situation, much as Chaos Engineering does.

Also, look around for the missing parts: should we verify those missing flows, or the flaws hiding in our predefined happy paths? Testing must help turn unknowns into knowns; each test case should run under conditions similar to our customers’ real ones, and then we should tweak those conditions a little, as in the sketch below.
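
Here is a small fault-injection sketch in that spirit, using Python’s standard unittest.mock to force a dependency failure on demand. The `fetch_profile` client, its endpoint, and its fallback behavior are all hypothetical:

```python
# Falsify the claim that the caller degrades gracefully when a
# dependency fails, instead of waiting for production to prove it.
from unittest import mock

import requests


def fetch_profile(user_id: str) -> dict:
    """Hypothetical client: falls back to a default profile on timeout."""
    try:
        resp = requests.get(
            f"https://api.example.com/users/{user_id}", timeout=2
        )
        resp.raise_for_status()
        return resp.json()
    except requests.Timeout:
        return {"id": user_id, "name": "unknown"}


def test_profile_survives_timeout():
    # Inject the failure deliberately rather than hoping it never happens.
    with mock.patch("requests.get", side_effect=requests.Timeout):
        profile = fetch_profile("42")
    assert profile["name"] == "unknown"
```

The test turns an unknown (“what happens when the user service times out?”) into a known, pinned down by an assertion that can fail.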


Navigation: Usability Of Our Applications

There is some excellent material out there on usability; however, there is no single way to run software usability testing. The right approach is whichever yields the most helpful insight for the application, and that varies from company to company. We should therefore consider the company’s objectives and preferred outcomes when deciding which usability testing methods to use and how best to approach the task.


Final Thoughts

A successful test automation strategy must adapt to our current customer needs and team capacity. Consequently, we must avoid generic approaches and make sure every test includes an element that can go wrong. That is what will help your team, and your customers, uncover potential failures.

Enrique De Coss

Enrique A. Decoss is a Quality Strategist with a focus on automation testing teams. A certified Scrum Master, Tricentis Tosca Certified Automation Architect, and Pythoneer, he focuses on web programming and API testing strategies, as well as different methodologies and frameworks. Enrique is currently at FICO and can be found sharing his knowledge on LinkedIn and Twitter.