The conundrum
In virtually every software engineering organization I have worked in, management is aware of the benefits and importance of a comprehensive automated software test suite and the pitfalls of scrimping on testing. Yet its actual application varies widely across projects and organizations. Often I find that automated testing is adopted to some extent, but with insufficient rigor, thought, and attention. Occasionally I have seen teams that don't do automated testing at all. On almost every team I have worked with, when there is a crunch to get something done, testing rigor is relaxed, dropped outright, or at least postponed until the crunch is over.
Clearly there are pain points in the process that lead to this situation.
Pain Points
- External schedule constraints
  - Deadlines, deadlines, deadlines
  - Customer demo tomorrow
  - Emergency of the day
Schedule constraints are often the biggest reason for reduced rigor in testing methodology. It is especially difficult when they are immutable deadlines coming from external sources such as:
- Software delivery to hardware devices or systems whose release schedules are wholly dictated by hardware availability
  - For example, when working on a pre-installed app for a phone, the app usually either makes the delivery date or is pulled
- Challenge of quantifying the cost-benefit trade-off
  - How do we know that we aren't spending more than we are getting in return?
- Test maintenance burden
- Difficulty of retrofitting tests into a legacy code base
- Up-front cost
  - Test framework identification
  - Developer training
  - Developer mindset shift
  - CI integration
- Difficulty in getting "valuable" test metrics
Addressing the Pain
- External schedule constraints
One strategy for dealing with a situation where there is insufficient remaining time to deliver both the software and a full set of automated developer tests is to:
- Identify all the required tests and add them to the backlog
- Deliver as many of the most critical tests as possible
- Supplement the reduced set of automated tests with more extensive QA testing (both manual and automated) for the immediate deliverable
- Schedule the remaining unfinished tests for delivery in the following sprint, as soon as the “fire drill” is over
In some especially reactive environments it can be challenging to break out of this “fire drill” mode, as the fire drills cascade upon each other. This can be a sign of issues that need to be addressed at the management level, including:
- Unwillingness to turn away new business
- Unrealistic expectations
- Lack of understanding of the impact of these decisions
- Insufficient development resources
It is a responsibility of the senior members of a development team to point this out to management and help work out a plan to remediate the issue. Depending on team charter and business conditions this can be a very difficult problem to solve, and it isn't necessarily a management failure. Developers should take an active role in sharing the responsibility for improving the process and steering it back to sanity.
Even in the most aggressive environments there will always be “some” down-time, which can be dedicated to catching up on testing. This is a good time to advocate for sprints focused on getting the automated test suite back into shape.
- Quantifying the cost-benefit trade-off
Perhaps the best way to get upper-management support for testing is to demonstrate that the cost in delivery schedule and engineering resources is outweighed by the benefit. This can be challenging to quantify, to say the least. Metrics such as:
- Severity and frequency of bug reports
- Savings in support development time
- Savings in refactoring time
- Increased revenue due to release of a higher-quality product
- Less need for refactoring
are difficult enough to measure when you have the data. But of course you can't have internal data until you have a well-tested code base to compare against, so it's a bit of a chicken-and-egg problem. One approach is to present these benefits qualitatively rather than quantitatively, and to reference cost-benefit trade-offs published by other companies that have made this investment.
There is a severe penalty for “catching a code base up,” so the benefit is greater when testing is applied from the beginning of a project.
- Test maintenance burden
There is clearly overhead associated with maintaining tests. The amount of this overhead can vary dramatically depending on factors such as:
- Proper test scoping
- Test repeatability
- Test complexity and supportability
When tests are properly scoped, repeatable, and straightforward, the maintenance cost is not so great. Systemic violation of one or more of these factors can easily push the maintenance cost so high that it exceeds the benefit. Sometimes a developer or manager will have had past experience in an environment that embraced automated testing but applied some of these constraints improperly. This can lead to a belief that automated testing “is not worth it”. It can be very difficult to challenge a belief system when it is based on experience! When encountering someone who doesn't believe in automated developer testing, it can help to dig into the experiences they had, understand where testing failed them, and explain how it might have worked better if applied differently.
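To make the scoping and repeatability points concrete, here is a minimal sketch of what a well-scoped, repeatable test can look like. It uses Python with pytest, and `parse_price` is a hypothetical function invented purely for illustration:

```python
# Hypothetical example: a small, deterministic unit test.
# parse_price() is an invented function used only for illustration.
from decimal import Decimal

import pytest


def parse_price(text: str) -> Decimal:
    """Parse a price string such as '$1,234.50' into a Decimal."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return Decimal(cleaned)


# Properly scoped: exercises one unit of behavior with no network, clock,
# or shared state, so it is fast and produces the same result on every run.
@pytest.mark.parametrize(
    "text, expected",
    [
        ("$1,234.50", Decimal("1234.50")),
        ("19.99", Decimal("19.99")),
        ("  $0.05 ", Decimal("0.05")),
    ],
)
def test_parse_price_valid_inputs(text, expected):
    assert parse_price(text) == expected


def test_parse_price_rejects_garbage():
    # Failure behavior is pinned down explicitly rather than left implicit.
    with pytest.raises(Exception):
        parse_price("not a price")
```

A test like this only breaks when the behavior it pins actually changes, which is what keeps the maintenance cost low.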
- Difficulty of retrofitting into a legacy code base
The value of adding good test coverage to a legacy code base is lower, and the cost higher, than if testing had been applied from the beginning of the project. However, in such environments it can still be valuable when applied judiciously and iteratively, in small chunks. For example, before refactoring a piece of buggy, complex, and/or obdurate code, it is helpful to put strong test coverage in place for the methods in question and use it to validate the refactored code (see the sketch after the list below). Similarly, any time new functionality is added, good test coverage for that new functionality is easy to justify. Finally, there is always “some” down-time in a project, which can be used to bolster tests. I like to target areas of the code that are:
- Most problematic (highest bug reports)
- Functionally critical
- Core (used by many components)
- Most complex
- Most likely to change
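As an illustration of covering existing behavior before a refactor, the following sketch shows a characterization-style test, again in Python with pytest; `legacy_discount` is a hypothetical stand-in for whatever tangled code is about to be reworked:

```python
# Hypothetical sketch: pinning the current behavior of a legacy function
# before refactoring it. legacy_discount() stands in for the real code.
import pytest


def legacy_discount(total, is_member):
    # Imagine this is the tangled code we want to refactor.
    if is_member:
        if total > 100:
            return total * 0.85
        return total * 0.95
    return total if total <= 100 else total * 0.97


# Characterization tests: assert what the code does today, including the
# quirky cases, so a refactor can be validated against the same expectations.
@pytest.mark.parametrize(
    "total, is_member, expected",
    [
        (50, True, 47.5),      # member, small order
        (200, True, 170.0),    # member, large order
        (50, False, 50),       # non-member, small order
        (200, False, 194.0),   # non-member, large order
        (100, True, 95.0),     # boundary: exactly 100 is treated as "small"
    ],
)
def test_legacy_discount_behavior_is_preserved(total, is_member, expected):
    assert legacy_discount(total, is_member) == pytest.approx(expected)
```

With the current behavior pinned, including the boundary cases, the refactored implementation can be dropped in and validated against the same expectations.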
- Up-front cost
While there are initial costs associated with identifying and implementing a test framework, integrating with CI, and training developers, these costs are manageable and largely amortize across multiple projects.
Next up
In upcoming posts I will discuss an automated test strategy that I advocated and adopted for a recent project.