Adopting Agile Testing Practices

This case study describes what happened when we adopted the Scrum framework without applying the recommended Agile testing practices, and the impact we saw once we actually embraced them. Below you will find the issues we ran into and the practices that helped us overcome them. The main changes were the formation of cross-functional teams and the use of test automation. In hindsight, we made one of the most common mistakes of a new Agile team that had previously worked with incremental methodologies and then tried to use Scrum.

After delivering multiple projects of different scope, size and technology, the problem became visible when we moved to the enterprise level with a platform of 14 different applications, more than 300K lines of code and close to 10K unit tests. The platform was about to go live, but it had to be stabilized first. While trying to add functionality to it, we got into a spiral of massive bug-fixing rounds that worked as an alert, telling us that something was not working as it was supposed to. This spiral effect made it obvious that something had gone wrong with the processes we followed.

How We Worked

Working with Agile methodologies is about building small increments in short iterations of one to four weeks at most. At that time, besides the XP engineering practices, the team had partially adopted the Scrum framework, using some of the events such as sprint planning, the daily scrum and sprints, but the collaboration between testing and development looked more like a mini waterfall. The developers would build a complete piece of functionality (a user story), or even a whole sprint's worth, and only then would the test team start sprinting in order to send feedback as soon as possible. The many bugs that surfaced were handed back to development, sometimes even after the sprint had ended, to avoid disturbing the sprint. The defect backlog at that time looked more like a fridge than anything else. Working this way kept increasing our technical debt, and there was a very real chance that, further down the road, this would have devastating consequences, driving us into software entropy.

Issue #1 – Defects

The most obvious issue was the plethora of defects that piled up after each test run and could take many months to resolve. After every two to three sprints, an additional sprint had to be added, dedicated purely to bug fixing.

[Figure: Bugs found vs bugs fixed per month]

Side Effects

Releases went off track in order to fix as many bugs as possible, or at least all the bugs that were actually worth fixing.

Another side effect was that the variety and number of defects drove development to make changes (fixes) across a wide range of projects, creating even more instability. Such a round of fixes often ended with full tests of the several affected applications before the release. The situation looked like the image below: every round of feedback drove development or QA further off track, with bigger delays in every round.

[Figure: Pre-release overburn]

Preparation

Defect backlog grooming: the team reviewed all the defects and removed those that were out of scope or obsolete.

After running one last full test as a whole team over a two-week period to cover the entire platform, we embraced a simple and direct approach of continuous feedback (bugs found -> bugs fixed -> bugs reviewed) in order to close all of the remaining defects.

First Step – Definition of Done

“Defining done makes the work/steps obvious and therefore we gain awareness…”

It was unavoidable to start reviewing the processes we used, trying to understand what was wrong and redefining them in order to move forward. One of the main issues was that we had been creating increments that, in the end, should never have been marked as “Done”, considering the actual definition we eventually came up with. That definition spelled out the steps each user story had to pass through (“developed”, “testing”, “accepted”) before the increment could truly be called done.

Second Step – Bring Testing Close To Development

“Test fast, test early, reduces cost and risk”

Rephrasing “fail fast, fail early”, it seemed the same principle could be applied here. This was the actual trigger to form cross-functional teams and adopt the Agile testing practices that helped us be more proactive, find issues and defects at a better pace while features were being developed, and fix them at minimum cost. As a result, there were no more big rounds of testing that produced yet more feedback for fixes, which in turn returned batches of fixed code that would eventually break something else: since the scope of those test-fix rounds was the entire platform, someone would then have to retest the whole platform again… and so on.

Combining a clearly formulated definition of “Done” with the formation of a cross-functional team was the key not only to fixing defects by resolving them as they arose, but also to keeping the technical debt at a level that could be maintained.

The graph below illustrates the concept we adopted in trying to bring testing closer to development (see 4.a, 4.b).

[Figure: Bring testing close to development]

Third Step – Bring Testing Even Closer To Development

Bringing testing a step closer was not enough. It helped, but we needed something more: we had to somehow reduce the delay of the “bugs found” -> “bugs fixed” -> “bugs reviewed” cycle in order to speed up production. After a short period of observing and monitoring the adjustments, we came up with a solution that would immediately reduce the number of bugs and would also train the developers to gain a tester's point of view.

There was only one way to avoid this delay: not adding the bugs in the first place. What we actually did was very simple. Before a developer committed a piece of functionality and marked it as “Developed”, that developer and a tester reviewed the functionality together for five minutes and kept notes on obvious and potential issues that the developer should fix before committing the code. This small form of “pair testing” not only saved us a lot of time but also, in the long run, trained the developers to take a wider perspective when testing a feature they had just developed (see 4.).

[Figure: Bring testing even closer]

Issue #2 – Creating Increments on a Large Pile of Other Increments

Scrum teams deliver products iteratively and incrementally. When you are dealing with an enterprise application in which many of the components are interrelated and everything can break with a single commit, the risk of changing even a small piece of code has to be considered twice. Adding more increments meant that in roughly 60% of the cases a full test had to run, a full test that could take up to two weeks, and the results would then be handed to development as feedback to be fixed in a different iteration. The whole platform felt like an unstable, upside-down pyramid of increments, ready to collapse with every change.

[Figure: Unstable increments due to lack of test automation]

Fourth Step – Test Automations

“Shield functionality with automated tests, increases the team's speed…”

Each Increment is additive to all prior Increments and thoroughly tested, ensuring that all Increments work together. This was a major challenge for the QA engineers, who were not yet fully skilled in automation, but it was overcome by involving the developers, who contributed tools and knowledge to the automation effort, as in a typical cross-functional team.

[Figure: Test automation validates all previous increments]
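To make the idea of “shielding” more concrete, here is a minimal sketch of such an automated regression test in Java with JUnit 5. Everything in it is illustrative: the post does not name the platform's actual stack or tools, and OrderService, applyDiscount and the discount rule are hypothetical stand-ins for functionality delivered in an earlier increment.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

// Hypothetical regression tests that shield an existing increment (a discount
// calculation) so that later increments cannot silently break it.
class OrderServiceRegressionTest {

    // Minimal stand-in for a class delivered in an earlier increment.
    static class OrderService {
        BigDecimal applyDiscount(BigDecimal total) {
            if (total.signum() < 0) {
                throw new IllegalArgumentException("total must not be negative");
            }
            // Orders above 100 get a 10% discount.
            return total.compareTo(new BigDecimal("100.00")) > 0
                    ? total.multiply(new BigDecimal("0.90"))
                    : total;
        }
    }

    private final OrderService service = new OrderService();

    @Test
    void appliesTenPercentDiscountToLargeOrders() {
        // Behaviour from a previous increment: a 200.00 order becomes 180.00.
        assertEquals(0, new BigDecimal("180.00")
                .compareTo(service.applyDiscount(new BigDecimal("200.00"))));
    }

    @Test
    void leavesSmallOrdersUnchanged() {
        assertEquals(new BigDecimal("50.00"),
                service.applyDiscount(new BigDecimal("50.00")));
    }

    @Test
    void rejectsNegativeTotals() {
        // An edge case noted during a five-minute pair-testing session can be
        // captured here so it is re-checked on every build.
        assertThrows(IllegalArgumentException.class,
                () -> service.applyDiscount(new BigDecimal("-1.00")));
    }
}

With a suite like this running on every build, each new increment is automatically checked against the behaviour of the previous ones, which is the effect the diagram above is meant to convey.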

Fifth Step – Make Changes In Scope

Making modifications or fixes across the whole platform at once is not the solution; in the long run it causes more disturbance than it fixes. It is preferable to focus on the scope of the sprint and use the sprint time to make changes related to that scope, or at most to the application it concerns.

Results

The results came within three months and six completed sprints, and it was obvious that not all of the steps we followed had the same impact.

Let's start with the bugs, the initial issue. After applying all of the above, up to 40% of the completed user stories had zero bugs, and the bugs that did appear were closed at the time they were found; combining those two facts, after every sprint there were no remaining known defects. The graph below shows the reduction in bugs per month. This was expected, considering that many of the issues were now caught during the five-minute pair review before the feature was even committed.

[Figure: Bugs per Month]

To understand the impact, we must also look at how the entire production line was affected by reducing the time between the phases/steps defined in the definition of “Done”. The two major steps, “developed” (a feature is completed) and “accepted” (testing results and automations have been applied), saw their delay reduced by up to 25%, while the delay of “testing” (pending testing after a feature is completed) increased due to the faster flow of developed features.

[Figure: Step Delay per Hour]

Finally, the overall work time per step per completed feature (user story) was reduced as shown below. The average development phase dropped from 45.8 hours to 27.4 hours (roughly a 40% reduction) and the testing phase from 53.4 hours to 39.5 hours (roughly 26%).

[Figure: Cumulative per Month per Step]

Thanks to the “pair review”, we were able to produce complete user stories in less time, with less technical debt and, in many cases, without a single defect.