
DevOps Tools and Technologies from a Quality Perspective

Author: Niranjani Manoharan

Last updated: April 20, 2024


As quality engineers, we wait for applications to arrive in the staging environment for testing and, eventually, release. But have you ever paused to think about what actually happens behind the scenes after a developer makes a change and before it reaches you?

In this article, we follow code from the moment it’s committed into the version control system, through testing, and into production. 

 


 

When do we use Amazon Web Services? 


These days, a lot of companies host and deploy their applications on Amazon Web Services (AWS). What does this mean? AWS provides the infrastructure that lets end users request, receive, and interact with your application. So, if your application runs on AWS, then when a pull request (PR) is merged and the changes are deployed to staging, they are deployed to an Amazon EC2 (Elastic Compute Cloud) instance running Linux or macOS. AWS lets you monitor the status of these deployments.

 

Terraform


Terraform helps define server infrastructure for deploying software. The configuration to provision, modify, and rebuild an environment is captured in a transparent, repeatable, and testable way. Used correctly, these tools give us the confidence to tweak, change, and refactor our infrastructure easily and comfortably.
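As a taste of what that looks like, here is a minimal, hypothetical Terraform definition for a single EC2 instance (the region, AMI ID, instance type, and names are placeholders, not values from this article):

    # Hypothetical sketch: one EC2 instance defined as code.
    provider "aws" {
      region = "us-east-1"               # placeholder region
    }

    resource "aws_instance" "web" {
      ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
      instance_type = "t3.micro"

      tags = {
        Name        = "web-server"
        Environment = "staging"
      }
    }

Running terraform apply against this file provisions the instance; running it again after an edit reconciles the real infrastructure with the definition, which is what makes the setup repeatable and testable.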

Let’s define a stack as a set of infrastructure resources that are defined and managed as a single unit; this corresponds to a Terraform state file. With Terraform, we can then structure our environments in one of three ways:

  • Put all the environments into a single stack.
  • Define each environment in a separate stack.
  • Create a single stack definition and promote it through a pipeline.

In a nutshell, the first approach yields poor results; the second works well for simple setups (two or three environments, with few people working on them); and the third has more moving parts but works well for larger and more complex teams.

With the first approach, multiple environments are managed as a single stack, so a change made for staging can impact production down the line. For example, if we turn a feature flag ON in staging, the single-stack setup could result in the flag being ON in production by default unless we override that configuration.

With the second approach, since each environment is isolated, replicating the changes between environments requires vigilance and consistency. Otherwise, over time, these environments become snowflakes or isolated entities. 

With the third approach, we use a single stack definition to create multiple stack instances through a pipeline:

  1. A change is committed to the source repository.
  2. The continuous deployment server detects the change and puts a copy of the definition files into the artifact repository, with a version number.
  3. The continuous deployment server applies the definition version to the first environment, then runs automated tests to check it.
  4. The automated script on the continuous deployment server is triggered to apply the definition version to production.

This setup ensures that every change is applied to each environment. The versioning helps maintain consistency and sanity, especially when facing production issues. In addition, it makes it so much easier to debug! 
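As a rough sketch of steps 3 and 4, here is a hypothetical script the continuous deployment server might run. It assumes per-environment state is kept in Terraform workspaces and per-environment settings live in staging.tfvars and production.tfvars; the artifact URL and the run_smoke_tests.sh hook are placeholders, not part of the article's setup:

    #!/usr/bin/env bash
    # Hypothetical promotion script: apply one versioned definition to staging,
    # test it, then promote the same version to production.
    set -euo pipefail

    VERSION="$1"   # the versioned definition produced in step 2

    # Fetch the versioned stack definition from the artifact repository (placeholder URL).
    curl -sSf -o "stack-${VERSION}.tar.gz" "https://artifacts.example.com/stack/${VERSION}.tar.gz"
    tar -xzf "stack-${VERSION}.tar.gz"

    terraform init -input=false

    for env in staging production; do
      terraform workspace select "${env}" || terraform workspace new "${env}"
      terraform apply -input=false -auto-approve -var-file="${env}.tfvars"

      # Placeholder test hook: a failure here stops promotion to the next environment.
      ./run_smoke_tests.sh "${env}"
    done

Because the same artifact version flows through every environment, what you test in staging is exactly what lands in production.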

 

Docker


Let’s say your application needs a web server. Terraform helps configure servers, but we need some infrastructure to run them. In this case, we will use two AWS EC2 instances: one will run our Jenkins CI server, and the other will be configured with Docker to run our microservices and the web application.

With Docker Compose, you can define and run multi-container Docker environments. The Compose file is a configuration file in which you declare your application’s services and then start and stop them together. Compose lets you create multiple isolated environments and helps you test your changes locally. For example, you could add the configuration details for dependencies like databases, caches, and web services to a Compose YAML file. You can then create and start one or more containers for each dependency with a single docker-compose command. This simplifies environment creation for developers.
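For instance, a minimal Compose file for a web application with a database and a cache might look like the sketch below (the image tags, port, and credential are placeholders for local use only):

    # docker-compose.yml (hypothetical sketch): a web app plus its dependencies.
    services:
      web:
        build: .                       # build the application image from the local Dockerfile
        ports:
          - "8080:8080"
        depends_on:
          - db
          - cache
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example   # placeholder credential, local use only
      cache:
        image: redis:7

A single docker compose up (or docker-compose up with the older CLI) then builds and starts all three containers together.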

We can also extend this use case to automated testing. You just need to define the entire testing environment in your YAML file, and voilà: you spin it up, run your tests, and then tear it all down!
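As a sketch of that idea, a hypothetical override file could add a one-shot test-runner service on top of the Compose file above (the service name, test command, and file name are assumptions):

    # docker-compose.test.yml (hypothetical override): adds a one-shot test-runner service.
    services:
      tests:
        build: .
        command: pytest        # placeholder test command
        depends_on:
          - web
          - db

Running docker compose -f docker-compose.yml -f docker-compose.test.yml up --abort-on-container-exit --exit-code-from tests brings the environment up and runs the tests to completion, and the matching docker compose down --volumes destroys everything afterwards.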

 

Ansible


Now that we have our EC2 instances created, we can configure them using Ansible. Ansible automates the process of configuring machines to run whatever processes or servers you need. Basically, if you need to deploy an updated version of a server on all the machines in your enterprise, you just add the IP addresses of the nodes (or remote hosts) to your inventory, write an Ansible playbook to install it on all of them, and run the playbook from your control machine.

Returning to our example of running Jenkins on one of our EC2 instances, we need to install its dependencies. We start by registering the instance in the /etc/ansible/hosts inventory file so Ansible knows which machines to configure and provision.
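A hypothetical /etc/ansible/hosts inventory for this setup might group the two instances like so (the group names and addresses are placeholders):

    # /etc/ansible/hosts (hypothetical inventory): one group per EC2 instance role.
    # The addresses below stand in for the instances' private IPs.
    [jenkins]
    10.0.1.10

    [docker_host]
    10.0.1.20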

Similarly, for the second EC2 instance, we add the tasks required to run our web server in a Docker container to our Ansible configuration.
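A stripped-down playbook tying both instances together might look like the following sketch (the package name, container image, and ports are illustrative assumptions, not a tested configuration):

    # site.yml (hypothetical playbook)
    - name: Install Jenkins dependencies
      hosts: jenkins
      become: true
      tasks:
        - name: Ensure Java is present (Jenkins requires a JDK)
          ansible.builtin.package:
            name: java-17-openjdk      # placeholder package name; varies by distribution
            state: present

    - name: Run the web application in a Docker container
      hosts: docker_host
      become: true
      tasks:
        - name: Start the web container
          community.docker.docker_container:
            name: web
            image: myorg/web:latest    # placeholder image
            state: started
            published_ports:
              - "80:8080"

Running ansible-playbook site.yml from the control machine then configures both instances in one pass.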

In this example, we only used two EC2 instances. However, in a complex application, we would need to provision multiple EC2 instances for databases, caches, etc. In that scenario, Ansible’s automated workflow capability helps to make orchestrating tasks easy. Once you’ve defined your infrastructure using the Ansible playbook, you can use the same orchestration wherever you need to, as Ansible playbooks are portable.

 

Grafana


 

Now that we’ve discussed how developers’ code travels through the different environments, what happens after we deploy to staging and start running our automated tests? How do we track and monitor our test runs effectively?

Let’s discuss how we can leverage logging and monitoring tools for testing and automation. First, we’ll address the misconception that these tools are useful only for developers. 

Let’s assume you have added some test automation and now want to share progress with your leadership team. The number of automated tests alone doesn’t add much value beyond showcasing test coverage, and test automation efforts often get siloed within companies. We can break those silos by surfacing metrics like the number of hours saved by automation versus manual testing and the test pass percentage on Grafana dashboards. This gives the leadership team more confidence that automation is adding value and that we can scale our testing efforts seamlessly.

With a Wavefront data source, Grafana can use the Wavefront Query Language (WQL) to retrieve and display the data that has been ingested into Wavefront. In my blog post, you can learn more about using time series data and displaying it in a graphical format. You can also add alerts for when the test pass percentage falls below a threshold value.
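As an illustration, a Wavefront-style pass-rate query might look something like the line below; the metric names are hypothetical, and the exact expression depends on how your test results are ingested:

    ts(tests.passed) / (ts(tests.passed) + ts(tests.failed)) * 100

An alert can then be defined to fire when this series drops below, say, 95%.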

 

Kibana


Let’s say the developer’s changes have been deployed to production, and now you’re encountering issues. How would you go about debugging? There are several tools that companies leverage these days, like Loggly, Kibana, etc. We will limit the scope of this discussion to Kibana.

Kibana is part of the ELK (Elasticsearch + Logstash + Kibana) stack. This collection consists of three open-source projects:

  • Elasticsearch is a full-text search and analytics engine.
  • Logstash is a log aggregator that collects and processes data from multiple sources, transforms it, and ships it to Elasticsearch.
  • Kibana provides a UI to allow users to visualize, query, and analyze their data.

In a large-scale enterprise, multiple ELK nodes may be aggregated to support a complex architecture, which may also require additional components like Apache Kafka for buffering and Beats for data collection.

 


Kibana enables you to search, filter, and tail all your logs ingested into Elasticsearch. 
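For example, assuming your logs carry fields like service.name and log.level (the field values and service name below are hypothetical), a Kibana query to narrow down errors from a single service during an incident might look like:

    service.name : "checkout" and log.level : "error"

Combined with Kibana’s time filter, this quickly shows whether the service logged anything at all in the incident window.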

So, if you don’t see logs for a specific event, the cause could be anywhere along this pipeline, from data collection to buffering to the processing that happens in Logstash, rather than simply an absence of logging events in the codebase!

Please refer to my recent talk for Test Tribe Community, where I do a live debugging session using Kibana.

 

Expanding Quality Engineers’ Horizon

As quality engineers, we test in different environments: staging, pre-production, and production. If you see inconsistencies between your environments, you should be able to narrow the issue down to, say, a Terraform configuration or an Ansible playbook/module that wasn’t updated correctly. Building a good understanding of your observability stack with tools like Grafana and Kibana will help you improve logging and monitoring at your current workplace. Understanding how things work behind the scenes aids in debugging issues that aren’t necessarily within the scope of the quality team, and it broadens your view of how the pieces are interconnected. This knowledge should reduce your dependency on the DevOps/Observability/SRE team to identify an issue; instead, you can help them by identifying it for them.

 

Niranjani Manoharan

Niranjani has been a software engineering lead at industry pioneers such as Lyft, eBay, and Twitter. A speaker and trailblazer, she was recently featured in Agile Testing Days' "100 Women in Tech to Follow and Learn From". Niranjani often shares on her Twitter, website, and blog, as well as at conferences and on podcasts she's invited to.