Flattening the Test Pyramid

Mike Cohn introduced the concept of the Test Pyramid in his book Succeeding with Agile back in 2009. In the 10+ years since, the Test Pyramid, along with a multitude of its variants, has established itself fairly well in the Software Quality Assurance (SQA) landscape. It has become the metaphorical kindling for any team setting out to define its SQA strategy – perhaps inevitably the first scribble on the whiteboard. Admittedly, it does give one a great start!

(Image courtesy of: https://martinfowler.com/articles/practical-test-pyramid.html)

 

Around the dawn of 2019, when rethinking our SQA strategy at OpenGov, we also started there. We had an amazing quality crew back then, which had laid a solid foundation for us to build upon. But we were headed towards a rather significant qualitative expansion of our SQA strategy and a quantitative expansion of the team to realize that strategy. As with much else that we do in Engineering at OpenGov, we guided ourselves with one of our first principles: Unification.

We had a number of autonomous teams building a number of SaaS applications on a fairly rich tech stack. And we were gearing up for tremendous growth in our application portfolio, both through in-house innovation and through acquisition. A unified and proven engineering and operational methodology is fairly important for the successful integration of acquired technology and the immersion of the engineers who built that technology.

Eighteen months later, having seen our approach work rather well, we have something to share in the form of high-level learnings and blueprints that others out there might find useful for their own journey in SQA. In this article I will cover three specific aspects: 1) how (and why) we organized, built, and scaled the team; 2) the vision with which we led the team and grew its charter in a manner that is friendly to the pace and maturity of the different product development groups; and 3) the tech stack that this team created to address OpenGov’s SQA needs. We feel well positioned, but there is still a lot to explore, discover, and act on as OpenGov emerges in the ERP space. Our SQA crew will continue to share details (such as this) from our ventures – follow us here!

 

The Team

In the traditional sense of the word Team, we did not want to build an SQA Team. Instead, we wanted to build an SQA Guild. The word ‘Guild’ is defined as an association of people who oversee the practice of their craft in a particular area for mutual aid or the pursuit of a common goal. More specifically, we wanted to build a well-rounded SQA Guild that fits well in the framework of our autonomous engineering teams without losing the spirit of being an association that is pursuing the common goal of maintaining the highest standards in quality of OpenGov’s applications. This is how we achieved our intent:

  • Each autonomous engineering team has a Lead Software Development Engineer in Test (SDET). This person is responsible for planning the overall test strategy for that team’s applications and overseeing the day-to-day execution.
  • The SDET is supported by additional Quality Engineers (QEs) who belong to that autonomous engineering team. These supplementary engineers include SDETs, test automation generalists, domain (e.g. UI, API) automation specialists, and manual testing specialists. This is where we qualitatively and quantitatively tailor the team to the needs of the autonomous team, based on the general nature and complexity of the applications it owns, the speed at which it operates, and the maturity of its SQA operations.
  • Finally, and this is of vital importance in my view, we have an Architect in Test. This person does not belong to any one autonomous team. At OpenGov we have this role in the Engineering Operations team, but this person could equally well succeed in an Architects team. The duo of the Architect in Test and the Engineering leader of that central (operations or architecture) team plays the biggest role in maintaining the spirit of the SQA guild. This duo is the glue that binds the guild together and the force that keeps it moving along a strategic vector.

Depending on the size and goals of your engineering organization, you may want to consider having more than one Architect in Test. You may make the Architect in Test a domain specialist (e.g. performance and scalability, security, or reliability), or you may opt for a generalist Architect in Test who can work with the other leaders in your engineering organization who themselves specialize in those domains. At OpenGov we followed the latter model: our Architect in Test is an amazing generalist who works with the engineering leaders in those specialized domains.

We encourage the rotation of QEs between the various autonomous teams, typically after a person has spent 1–2 years in one team. (In all honesty though, we are still trying to figure out the best organic way to make that happen.)

It is worth calling out that there is no “QA manager” in this structure. The Lead SDETs are the technical managers/leads, and the engineering manager of the autonomous team is the people manager of the Lead SDET and any other QEs in that team. At OpenGov we are of the opinion that ownership of the overall delivery plan and the quality of the product/component is the responsibility of the engineering manager – in other words, there is no “throwing (quality assurance) over the fence”. R&D leadership at OpenGov has seen in past lives that organizational alignments in which a QA organization, separate from the development organization, is asked to “own the quality” – or worse, to police the development teams – simply fail to instill a culture of engineering quality. We have adopted this way of thinking as one of our first principles, and from this principle came the organizational alignment described above.

The Vision

This is where we get to flatten the pyramid, so let’s get right to it. The infographic below is the vision that we set in those early days a year and a half ago. (X-Platform refers to Cross-Platform.)

[Infographic: our SQA vision]

We did not have tests or frameworks for everything listed above, but we clearly called out the areas that we wanted our teams to be thinking about. All of that is a tall order, so we further classified the tests into “first-order” and “higher-order” tests as a model for the teams to prioritize the realization of our SQA strategy. The general guidance was for the teams to focus their automation efforts on the first-order tests before the higher-order tests. You will naturally find that some teams have more coverage than others on that map, but overall, it is the charting of that map that the teams pursue. A move toward covering higher-order testing is a fairly good indicator of a team’s growing maturity in SQA.

At the top, you see our suggested ownership/accountability model. A Software Development Engineer (SDE) typically writes all unit tests and some integration tests. The team of SDETs and QEs in an autonomous team drives the rest of the charter, and more often than not does so in close collaboration with the developers. Some of our autonomous teams, especially those with a higher level of maturity in SQA, actually find their developers using the frameworks that have been built by the QEs.

 

QualityOps

This culture, in which a group of engineers uses frameworks and methodologies purpose-built by other engineers (similar to the philosophy behind DevOps), led us to coin the term ‘QualityOps’ in our little world. As a result, we now have:

  • QualityOps roadmap (tended by our Architect in Test) to build and extend these frameworks,
  • QualityOps Jira board (tended by our Architect in Test & Lead SDETs) for planning and execution, and
  • QualityOps software engineering capacity from our federated QEs and developers to deliver on the plan.

 

The Creation

The infographic below shows where we are today. For rapid innovation, we opted for a diverse set of homegrown, open-source, and commercial tools and services. Wherever required and possible, our engineers have integrated these tools and services so that the whole works as a polyphony.

[Infographic: our SQA tooling today]

Our tooling today:

  • Unit testing: JUnit for Java, Jest for JavaScript, and PHPUnit for PHP programs.
  • API testing: rAPId, a Java-based home-grown framework.
  • UI acceptance testing: Ruby-based Howitzer or JavaScript-based Nightwatch, with the latter slowly but surely becoming the framework of choice for our teams.
  • Cross-platform testing (cross-browser <> cross-OS <> cross-device): BrowserStack.
  • Accessibility: a framework built using Lighthouse, driven by Nightwatch.
  • Performance and scalability: load tests written with Gatling.
  • Security: TScanner (powered by Trivy), a tool our security engineers built for vulnerability scanning of our binary artifacts, complemented by internal and external penetration testing of our applications.
  • Customer experience monitoring: Ghost Inspector. (I also recognize Ghost Inspector as a good gray-box monitoring tool for our SREs.)
  • Infrastructure: Terratest, used by our Infrastructure crew to write automated tests for their IaC programs.
  • Test management and reporting: TestRail to document our test case specifications and Allure for test reports.

Knowing how sparse that infographic was when we imagined it 18 months ago fills me with tremendous gratitude for everyone in our SQA guild. I am forever proud of their achievements. That brings me to…

 

Recognition

OpenGov’s Architect in Test: Pushkala Pattabhiraman; Lead SDETs: Digbijay Shrestha, Shruthi Narayanan, Shubha Alvares, & Sullivan Valaer; and our prolific QE crew: Alexander Hritsun, Dmytro Lytvynenko, Fernando Campos, Guangyun Hou, Iryna Samokhvalova, Ivan Zozuliak, Mariia Kalinichenko, Nataliia Salinko, Oleksandr Volkov, Olena Kanevska, Olena Kliuka, Olena Shyshka, Roman Kanafonskyi, Viktor Lisniak, and Vladyslav Radko.

I would also like to recognize OpenGov’s partnership with Strong QA Ltd, who helped marshal an amazing crew of quality engineers – thank you Roman Parashchenko and Andrii Kotsiuba.

If you like what you’ve read in this or other articles by our engineering team, then come and join us to ignite your potential. Keep an eye out for opportunities on https://opengov.com/careers/ or send me a message here.
