Incremental Deliverables From a Quality Perspective
The Journey to Testing Improvements
Pass or fail. That is the world most people in the software quality domain live in. Things are either broken or not. Sometimes they sit between those states; however, they almost always lean toward being either broken, or not broken and simply misunderstood. So testing, by nature, has a bias toward a binary perspective.
At Duo we have a very healthy testing culture where the testing process is shared, agreed upon, and executed by the Engineering department as a whole. This healthy environment pushes most testing projects into the realm of improving the testing process rather than execution. That can prove challenging because testing in general has that binary component to it (remember pass or fail?). However, using Duo engineering principles, we worked to improve the quality of our Windows integration through incremental changes.
I hope after reading this example you will take the challenge to heart: when looking to improve quality, the temptation to deliver perfect testing must not inhibit the delivery of better testing. In many ways this project (like countless others in quality) was quite the journey, and every journey has a beginning, so let's look at the start.
What Was Our Destination?
To improve testing we needed to know two things to get started: what was the current state of testing, and what was the future state of testing that we desired? During our project to improve the testing of our Windows integration, we analyzed the existing testing, which was, as expected, very good. The product had a healthy amount of automated tests that covered several key features. There were also plenty of thorough manual tests that covered the remaining set of features for this product. In addition, there were scripts to create environments and resources to test all of the Windows operating systems that Duo supported with this integration.
There were, of course, a few key areas that we identified as needing improvement. We needed to convert approximately 67 manual tests to automated tests to reduce testing time and increase the speed of release for new features. We also needed to consolidate the automated test execution; initially, running the automated tests required multiple manual actions.
Ultimately we needed to get these tests executing more often in our continuous integration (CI) pipeline. If we had focused only on that ultimate goal, I don't think we would have accomplished it on any sort of reasonable timeline. Instead, we decided to do it the Duo way.
How Did We Get There?
One of the principles of Duo’s engineering culture is “Don’t let perfect get in the way of the good.”
This principle is based on concepts similar to the idea of Minimum Viable Product (read more about that here). If we focused solely on delivering the ultimate goal of automated test execution in our CI pipeline we would have done just the opposite of that. It was tempting too because we knew what passing (or accomplished) looked like.
From the quality perspective we had to create stages of passing in order not to be caught in the snare of a waterfall delivery that waited for the ultimate goal to be achieved. First things first, we wanted to get automated test coverage to a better spot.
Prioritizing Existing Manual Tests
Collaborating with the Local Authentication Engineering team, we prioritized existing manual tests based on the following criteria: risk of bug introduction, ease of automation, and execution time of the manual version of the test.
Let’s look at each of these briefly.
Risk of bug introduction: simply put, let's make sure we test our potential vulnerabilities early and often. Areas of the integration that could fail and lead to false or unprotected authentication were prioritized. By automating these tests we are able to efficiently and consistently test the major purpose of the integration, namely the authentication portion.
Ease of automation: automating some features can be more difficult than others. Rather than tackling the hardest problems first, we decided to focus on easy wins, but only after vetting them through the risk-of-bug-introduction assessment. This allowed us to make an impact early and build momentum and product knowledge as we moved forward to tackle the more difficult test scenarios.
Execution time of the manual version of the test: the final factor we considered as we worked to convert existing manual tests to automated tests was the execution time of the manual test. One of the goals of converting these tests from manual to automated was to free up our development teams and engineers to focus more on feature work, and one of the challenges our engineers were facing was extensive test setup and execution time for manual tests. Thankfully, our engineers had been diligent in recording execution times for some of the lengthier manual tests, so we had data to reference when determining which tests were the longest.
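The three criteria above can be sketched as a simple scoring function. This is purely illustrative: the test names, weights, and scales below are hypothetical, and the real assessment was a collaborative judgment call with the engineering team, not a formula.

```python
# Hypothetical scoring sketch for prioritizing manual tests for automation.
# Weights and 1-5 scales are illustrative, not Duo's actual process.
from dataclasses import dataclass

@dataclass
class ManualTest:
    name: str
    bug_risk: int           # 1 (low) to 5 (high): risk a bug slips through untested
    automation_effort: int  # 1 (easy) to 5 (hard) to automate
    manual_minutes: int     # time to set up and run the test by hand

def priority(test: ManualTest) -> float:
    """Higher score = automate sooner. Risk dominates, then easy wins,
    then the longest-running manual tests."""
    return (test.bug_risk * 10
            + (6 - test.automation_effort) * 3
            + test.manual_minutes / 10)

backlog = [
    ManualTest("offline_auth_bypass_check", bug_risk=5, automation_effort=3, manual_minutes=25),
    ManualTest("ui_branding_screenshot", bug_risk=1, automation_effort=2, manual_minutes=10),
    ManualTest("rdp_login_happy_path", bug_risk=4, automation_effort=1, manual_minutes=40),
]

# Highest-priority candidates for automation come out first.
for t in sorted(backlog, key=priority, reverse=True):
    print(t.name, round(priority(t), 1))
```

A scheme like this makes the trade-offs explicit: a high-risk authentication test outranks an easy cosmetic one even if the latter is quicker to automate.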
With all of these factors in consideration, we began working to replace existing manual tests with automated tests. As the need arose, we also wrote automated tests to cover new features currently being developed. In all, with this strategy and a ton of collaborative effort, we converted 62% of the manual tests to automated tests when compared to a previous release's test plan.
Staying the Path
Once we reached our first stage of passing (getting our test automation coverage up), we moved to improving test execution. Without the plan to incrementally improve testing, the temptation would have been to move directly to the endgame of folding these tests into our existing CI pipeline. However, that would have let perfect get in the way of the good.
So instead we shored up the test execution by running our tests in all of our supported environments and tackling any flakiness or errors in the existing tests. This involved improving older existing tests by finding new ways to interact with the application. We also performed massive refactors to better utilize the Pytest library (the testing framework we were primarily using) to reduce code duplication and make our tests easier to run and clearer about what they were doing.
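Pytest's fixtures and parametrization are the usual tools for this kind of deduplication: one parametrized test can replace several near-identical copies. The sketch below is hypothetical (the Windows version list, test names, and `fake_login` stub are all made up for illustration), but it shows the shape of the refactor.

```python
# Illustrative sketch of deduplicating tests with Pytest fixtures and
# parametrization. Names and the fake_login stub are hypothetical.
from dataclasses import dataclass

import pytest

SUPPORTED_WINDOWS = ["Server 2019", "Server 2022", "10", "11"]  # illustrative list

@dataclass
class LoginResult:
    prompted_for_mfa: bool

def fake_login(version: str, username: str) -> LoginResult:
    # Stub standing in for driving the real integration's login flow.
    return LoginResult(prompted_for_mfa=(username == "enrolled_user"))

@pytest.fixture(params=SUPPORTED_WINDOWS)
def windows_version(request):
    # In a real suite this would select or provision a test environment;
    # here it just yields the version name.
    return request.param

@pytest.mark.parametrize("username,expect_prompt", [
    ("enrolled_user", True),
    ("unenrolled_user", False),
])
def test_login_mfa_prompt(windows_version, username, expect_prompt):
    # One test body now covers every (version, user) combination that
    # previously needed its own near-identical copy.
    result = fake_login(windows_version, username)
    assert result.prompted_for_mfa is expect_prompt
```

With the fixture parametrized over environments and the test parametrized over scenarios, adding a newly supported Windows version becomes a one-line change instead of another copied test file.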
After we were able to get our tests passing and stable within our existing environments, we moved forward with consolidating the scripts that actually ran the tests. We worked to ensure that our tests could be launched from a central location with a single script that triggered execution on all of our supported environments. We leveraged an existing PowerShell script to organize, execute, and report our tests. The idea was that once we implemented these tests in our CI pipeline, we could have a fair amount of confidence in their reporting and execution. Consolidating test execution also laid the groundwork for independent test execution on each supported environment once these tests ultimately made it into the CI pipeline.
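The shape of that single entry point can be sketched as follows. Our actual runner was a PowerShell script; this Python sketch, with made-up environment names and flags, only illustrates the dispatch idea: one script composes a per-environment test invocation, so CI later needs just one thing to call.

```python
# Sketch of a single-entry-point runner dispatching the suite to every
# supported environment. Environment names, the --environment flag, and
# the results layout are hypothetical; the real runner used PowerShell.
ENVIRONMENTS = ["win10-x64", "win11-x64", "server2019", "server2022"]

def build_command(env: str, extra_args=()) -> list:
    """Compose the test invocation for one target environment."""
    return ["pytest", "tests/",
            f"--environment={env}",
            "--junitxml", f"results/{env}.xml",
            *extra_args]

def run_all(runner=print):
    # `runner` is injectable so the dispatch logic can be exercised
    # without actually launching tests on remote machines; a real
    # implementation would pass something like subprocess.run here.
    for env in ENVIRONMENTS:
        runner(build_command(env))

if __name__ == "__main__":
    run_all()
```

Keeping the command construction separate from the execution step is what makes the later CI integration straightforward: the pipeline job only has to invoke the one entry point.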
Reaching the Destination… for Now
With all these improvements in place, the final goal was in sight. Now came the task of getting these tests running in our CI pipeline. Thankfully, another core principle in Duo's engineering department is collaboration. We were able to partner with our Build and Tools team to identify and set up all of the necessary infrastructure to connect our pipelines to the environments we were testing in. We also leaned heavily on our Cloud Security team to advise and support us in making sure our soon-to-be-integrated tests ran in the most secure way possible. Both of these teams provided insight and expertise that helped make this project possible. Finally, we collaborated with the Local Authentication engineering team (the people who build and test features for the integration regularly) to gain insight into how often these tests should run.
We agreed that running a subset of tests that could identify critical failures early in the development process would be helpful. So we set up a scheduled pipeline job to run the tests every weekday morning and report results at the beginning of the day. We stored these results in a long-term database to maintain a historical record and to use that data to identify testing trends in the future.
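A common way to carve out such a subset in Pytest is with markers, so the scheduled job can select only the critical tests (for example, `pytest -m critical`). The marker name, test names, and `authenticate` stub below are hypothetical, not Duo's actual suite.

```python
# Hypothetical sketch of tagging a critical subset for a scheduled run.
# The scheduled CI job would invoke `pytest -m critical` to run only the
# marked tests; the full suite still runs everything.
import pytest

VALID_CREDENTIALS = {"enrolled_user": "correct-password"}

def authenticate(user: str, password: str) -> bool:
    # Stub standing in for the real Duo-protected Windows login.
    return VALID_CREDENTIALS.get(user) == password

@pytest.mark.critical
def test_primary_auth_succeeds():
    assert authenticate("enrolled_user", "correct-password") is True

@pytest.mark.critical
def test_wrong_password_rejected():
    assert authenticate("enrolled_user", "wrong-password") is False

def test_custom_branding_renders():
    # Not critical: covered by the full suite, skipped by the daily subset.
    pass
```

Registering the `critical` marker in `pytest.ini` (under `markers`) keeps Pytest from warning about an unknown mark and documents the subset for the rest of the team.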
The last leg of our journey to testing improvement was the information hand-off. Over the course of two weeks we collected information, documented processes, and presented findings to our Local Authentication Team in order to better prepare them to handle the maintenance and further improvements of their testing.
Bumps in the Road
No project would be complete without some lessons learned, and most journeys have a few detours. One of the many lessons we learned during this project was to lean on expertise, particularly when it came to our CI pipeline. Thankfully, we have wonderful teams (the Build and Tools team and the Cloud Security team) that were able to assist us with setting up and securing the integration of our automated tests into the existing CI pipeline. Involving these experts earlier would have been to our advantage.
Another major lesson we learned was that your first idea (and at times your second and third) may not be the best one. When implementing the test execution changes, we tried three different approaches before we were able to get tests executing stably with a single script. However, we didn't just throw away those first two attempts. We treated their failures and challenges as learning moments and built on the issues we found in both implementations.
What allowed us to be successful in this automated transformation was doing things the Duo way, highlighted by two core principles: collaborate often and don’t let the perfect get in the way of the good. Those two guiding principles helped us push back against a pass/fail mentality and ultimately helped us deliver incremental improvements that added value along the way.
Collaboration and iteration helped us bring in the right voices and insights to guide our project and helped us know how to parcel out the work needed in order to be successful.
For your next testing overhaul or major feature change, don't think about the project as a point-A-to-point-B journey, but rather as a journey with multiple signposts along the way and tons of helpful guides to assist you in getting there. Or, to put it simply, do it the Duo way.
To learn more about the Duo approach to quality, check out our blog on Quality Metrics or if you would like to implement the Duo approach firsthand apply for the open position on our Platform Quality Engineering Team here. And if you want to learn more about the Windows integration we were testing, check out the documentation here.