Background
- Validate the Master Controller
- Guard against regressions in the Master Controller
- Validate system interactions
- Allow us to develop functionality independently of the schedules/delivery of new functionality in the View and Model/Model Controller components
Strategy
It is always a good idea to discuss and document your test strategy. It was even more important for this project, as the other developers on the team, coming from environments where automated testing was not a priority, were less familiar with standard automated testing tenets. I went about this by performing the following steps:
- Develop and document the proposed strategy on the internal project wiki
- Gain team buy-in
- Implement integration test framework
- Create tests exemplifying usage for a variety of different types of modules
- Provide training
Unit Test
Do
- Test each module in isolation
- Test edge conditions (see the example after this list)
- bad or null inputs
- etc
- Keep tests fast
- Ideally milliseconds
- Keep cyclomatic complexity low
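As a concrete illustration, here is a minimal JUnit 4 sketch along these lines; TemperatureParser is a hypothetical class invented for the example, not a module from this project. Each test exercises one module in isolation, covers the bad/null input edge cases, and runs in milliseconds.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical module under test: parses readings like "21.5C" into a double.
class TemperatureParser {
    double parse(String reading) {
        if (reading == null || !reading.endsWith("C")) {
            throw new IllegalArgumentException("Bad reading: " + reading);
        }
        return Double.parseDouble(reading.substring(0, reading.length() - 1));
    }
}

public class TemperatureParserTest {

    private final TemperatureParser parser = new TemperatureParser();

    // Happy path: a well-formed reading parses to the expected value.
    @Test
    public void parsesValidReading() {
        assertEquals(21.5, parser.parse("21.5C"), 0.001);
    }

    // Edge condition: null input is rejected up front instead of failing later.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNullInput() {
        parser.parse(null);
    }

    // Edge condition: malformed input is rejected as well.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsGarbageInput() {
        parser.parse("garbage");
    }
}
```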
Don't
- Allow timing dependencies other than timeouts
- No sleeps/timers
- Spawn threads in one test
- Depend on other tests or order of execution
- Leave artifacts
- Use @After methods (JUnit) to ensure that artifact cleanup happens even when a test fails (see the example after this list)
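Here is a minimal JUnit 4 sketch of the artifact-cleanup point, assuming a hypothetical ReportWriter that writes to a scratch file. The @After method runs whether the test passes or fails, so nothing is left behind, and each test creates its own file so there is no dependence on other tests or on execution order.

```java
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Hypothetical module under test: writes a report string to a file.
class ReportWriter {
    void write(File target, String contents) throws IOException {
        try (FileWriter writer = new FileWriter(target)) {
            writer.write(contents);
        }
    }
}

public class ReportWriterTest {

    private File scratchFile;

    @Before
    public void setUp() throws IOException {
        // Each test gets its own scratch file, so tests never depend on each other.
        scratchFile = File.createTempFile("report", ".tmp");
    }

    @After
    public void tearDown() {
        // Runs whether the test passed or failed, so no artifacts are left behind.
        if (scratchFile != null) {
            scratchFile.delete();
        }
    }

    @Test
    public void writesReportToFile() throws IOException {
        new ReportWriter().write(scratchFile, "summary");
        assertTrue(scratchFile.length() > 0);
    }
}
```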
Mocking External Dependencies
- Good unit tests only test the module under test, none of its dependencies
- Mock responses from these dependencies
- Verify the method calls on these dependencies, including
- Parameters passed
- Using stock argument matchers or custom validators
- Number of invocations
- Verify that methods that shouldn't be called are not called
- Mimic asynchronous callbacks from mocked objects
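The following sketch shows roughly how these points look with Mockito (Mockito 2 style API). StatusMonitor, DeviceGateway, AlarmService, and StatusCallback are hypothetical stand-ins invented for the example, not the project's real Master Controller classes: responses are stubbed, parameters are checked with a custom validator, invocation counts are verified, methods that shouldn't be called are checked with never(), and an asynchronous callback is mimicked with doAnswer.

```java
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.argThat;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class StatusMonitorTest {

    // Hypothetical collaborator interfaces, standing in for the real dependencies.
    interface DeviceGateway {
        void requestStatus(String deviceId, StatusCallback callback);
        boolean isOnline(String deviceId);
    }

    interface StatusCallback {
        void onStatus(String deviceId, String status);
    }

    interface AlarmService {
        void raiseAlarm(String deviceId);
    }

    // Hypothetical module under test: raises an alarm when a device reports FAULT.
    static class StatusMonitor {
        private final DeviceGateway gateway;
        private final AlarmService alarms;

        StatusMonitor(DeviceGateway gateway, AlarmService alarms) {
            this.gateway = gateway;
            this.alarms = alarms;
        }

        void poll(String deviceId) {
            if (!gateway.isOnline(deviceId)) {
                return;
            }
            gateway.requestStatus(deviceId, (id, status) -> {
                if ("FAULT".equals(status)) {
                    alarms.raiseAlarm(id);
                }
            });
        }
    }

    @Test
    public void raisesAlarmOnFaultStatus() {
        DeviceGateway gateway = mock(DeviceGateway.class);
        AlarmService alarms = mock(AlarmService.class);

        // Stub a response from the dependency.
        when(gateway.isOnline("dev-1")).thenReturn(true);

        // Mimic the asynchronous callback: invoke it immediately from the mock.
        doAnswer(invocation -> {
            StatusCallback cb = invocation.getArgument(1);
            cb.onStatus("dev-1", "FAULT");
            return null;
        }).when(gateway).requestStatus(any(String.class), any(StatusCallback.class));

        new StatusMonitor(gateway, alarms).poll("dev-1");

        // Verify the call, its parameters (custom validator), and invocation count.
        verify(gateway, times(1)).requestStatus(argThat("dev-1"::equals), any(StatusCallback.class));
        verify(alarms, times(1)).raiseAlarm("dev-1");
    }

    @Test
    public void doesNothingWhenDeviceOffline() {
        DeviceGateway gateway = mock(DeviceGateway.class);
        AlarmService alarms = mock(AlarmService.class);

        when(gateway.isOnline("dev-2")).thenReturn(false);

        new StatusMonitor(gateway, alarms).poll("dev-2");

        // Verify that methods that shouldn't be called are not called.
        verify(gateway, never()).requestStatus(any(String.class), any(StatusCallback.class));
        verify(alarms, never()).raiseAlarm(any(String.class));
    }
}
```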
PowerMock provided us with the key additional capability to mock static method calls.
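For illustration, here is a hedged sketch of what that static mocking can look like with the PowerMockito API; SystemClock and UptimeReporter are hypothetical classes invented for the example.

```java
import static org.junit.Assert.assertEquals;
import static org.powermock.api.mockito.PowerMockito.mockStatic;
import static org.powermock.api.mockito.PowerMockito.when;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Hypothetical legacy utility whose static method cannot be injected or wrapped.
class SystemClock {
    public static long nowMillis() {
        return System.currentTimeMillis();
    }
}

// Hypothetical module under test that calls the static method internally.
class UptimeReporter {
    long uptimeSeconds(long startMillis) {
        return (SystemClock.nowMillis() - startMillis) / 1000;
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(SystemClock.class)
public class UptimeReporterTest {

    @Test
    public void reportsUptimeBasedOnStaticClock() {
        // Replace the static call with a canned value so the test is deterministic.
        mockStatic(SystemClock.class);
        when(SystemClock.nowMillis()).thenReturn(120_000L);

        assertEquals(120L, new UptimeReporter().uptimeSeconds(0L));
    }
}
```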
Integration Test
Although unit testing covers the bulk of the testing for the product, it is also useful to validate interactions between the components. In the next post I will describe the integration test strategy I implemented for this project in detail.
System Test
CI
For tests to be effective, of course, they need to actually be run frequently. Ideally developers would all run the unit test suite before checking in, but this is not enforceable. We had a Jenkins CI environment, where we set up the automated tests to run on a fixed polling interval whenever code changes were checked in. Failures were logged and emailed to the team for resolution. We also tied in a code coverage tool to report progress against our coverage goals.
TDD
- Create your interfaces
- Create your tests to these interfaces
- Implement your code
- Run your tests
- Rinse/repeat as necessary until all tests are green (pass); see the sketch after this list
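To make the recipe concrete, here is a small sketch of steps 1 and 2 using hypothetical names (CommandQueue, InMemoryCommandQueue): the interface and its tests exist before any implementation, and the tests stay red until step 3 produces an implementation that makes them green.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import org.junit.Test;

// Step 1: define the interface first; this is the contract the tests are written against.
interface CommandQueue {
    void enqueue(String command);
    String dequeue(); // returns null when the queue is empty
    int size();
}

// Step 2: write the tests against the interface before any implementation exists.
// InMemoryCommandQueue does not exist yet, so this stays red (it won't even compile)
// until step 3 produces an implementation that makes it green.
public class CommandQueueTest {

    @Test
    public void dequeuesCommandsInFifoOrder() {
        CommandQueue queue = new InMemoryCommandQueue();
        queue.enqueue("start");
        queue.enqueue("stop");
        assertEquals("start", queue.dequeue());
        assertEquals("stop", queue.dequeue());
    }

    @Test
    public void reportsEmptyQueue() {
        CommandQueue queue = new InMemoryCommandQueue();
        assertNull(queue.dequeue());
        assertEquals(0, queue.size());
    }
}
```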
Some of the benefits of TDD are:
- Enforces accurate requirements and interface design up front
- Improves code readability, interface design, architecture, and quality (you're clearly much less likely to write untestable code :))
- Ensures that tests don't fall behind implementation
As part of this project I adopted a TDD approach, although I found it difficult to adopt whole-heartedly. I found "concurrent" test/development a better fit than strict adherence to the recipe. It definitely took longer to take a TDD approach than it would have to simply write the code, but no longer than it would have to develop the code and then the tests afterward. I advocated for similar approaches by other team members as part of the team training, but we did not enforce it.
Up Next
As already mentioned, my next post will delve into the integration test strategy and methodology that I adopted for the team on this project.