Saturday, April 2, 2016

Reinvigorating a large Android code-base



The problem

We faced a classic problem on a recent project. Stop me if you've heard this one before. We were supporting a well-established product with millions of users, running on a large underlying code base, but that code base had been accumulating technical debt for years, slowly turning into a Frankenstein's monster and causing support problems. The problems we faced included:

  • Separation of concerns violations making unit test maintenance and creation more challenging, with lower than ideal coverage
  • Threading complexity in places leading to defects that are difficult to find and repair
  • Numerous special-case customizations distributed throughout the code, each addressing specific customer needs that only activate for that customer
  • Reduction in team velocity due to code complexity and the other issues listed above

Challenges of Addressing Technical Debt

In one of my first posts I touched on the challenge of balancing time spent paying back technical debt against time spent adding new features. It is always difficult to find opportunities to address these concerns in a code base: the investment in a large refactoring effort is very high, while the direct customer value can be perceived as low (especially outside of engineering) and is difficult to quantify compared to ongoing targeted feature enhancements and defect fixes.

I find it usually best to refactor iteratively rather than tackle an entire code base all at once. In this fashion you can balance new feature development with the need to keep your foundation stable.

Solution Overview

On this recent project, however, we were tasked with refreshing the entire UI. This was an unusual opportunity to re-evaluate the code base holistically and introduce new concepts and techniques.
We targeted three primary technologies to aid in this effort:

  • Dependency injection using Dagger2
  • Decomposition of Android activities into Model/View/Controllers
  • Usage of RxJava and retrolambda for component communication and thread management
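As a taste of the third item, here is a minimal sketch (not the project's code; ProfileLoader and loadProfileBlocking() are hypothetical) of the kind of pattern RxJava 1.x plus Retrolambda enables: wrap a blocking call in an Observable so that threading is declared in one place rather than scattered across AsyncTasks and handlers.

import rx.Observable;
import rx.schedulers.Schedulers;

public class ProfileLoader {

    // Hypothetical blocking call, e.g. a database or network fetch.
    private String loadProfileBlocking(String userId) {
        return "profile-for-" + userId;
    }

    // Wrap the blocking work in an Observable; the caller chooses where the
    // work runs (subscribeOn) and where the result is delivered (observeOn).
    public Observable<String> loadProfile(String userId) {
        return Observable.fromCallable(() -> loadProfileBlocking(userId))
                .subscribeOn(Schedulers.io());   // keep the heavy work off the UI thread
    }
}

A caller on Android would typically add .observeOn(AndroidSchedulers.mainThread()) (from RxAndroid) before subscribing so that results land back on the UI thread.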

Over the next several posts I will delve into the approach we took, how each contributed to solving the problems identified above, and some of the challenges we faced. 

Saturday, March 5, 2016

Integration Test - Case Study


In the previous post I discussed an integration-test framework that I developed for testing system interactions with a Java Master Controller. Recall that this system consists of separate View, Master Controller, and Model/Model Controller components.

A basic flow through the system would start with an action in the View, pass through the Master Controller to the Model/Model Controller, pass back into the Master Controller, and then terminate with responses back to the View and the Model Controller.

For this case I configured Dagger2 to create a mock of the manager that interacts with the View (see the mocked View in the diagram). I used the mock to verify that the correct method is called with the correct parameters at the end of the flow. I also wanted to verify the methods and parameters called on the Model Controller, but I could not mock the Model Controller because it is integral to the flow we are testing. Instead, the test spies on the Model Controller to verify the expected calls.

Test flow (from diagram):
  1. Initiate an action into the controller to mimic a user interaction from the GUI (View)
  2. The Master Controller determines that the input action needs to be interpreted and dispatched to the Model Controller, and does so. I used Mockito to verify that the correct Model Controller method was called and that the correct parameters were supplied.
  3. The Model Controller interacts with the Model, processes the action, and sends its own command back to the Master Controller. I verify that the correct Master Controller method is called with the correct parameters. If the correct signature is seen, it means that the following components are working properly for the flow under test:
    1. The JNI layer for passing between the Java Master Controller and the C Model Controller for these interactions
    2. The Model Controller logic for this action 
  4. The Master Controller then interprets the action from the Model Controller, dispatches an action to the View to tell it what to display, and replies to the Model Controller with a confirmation that it was able to interpret and handle the command. Both of these terminal responses are also verified.
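Putting the mock and spy verifications together, the pattern looks roughly like the simplified, self-contained sketch below. The manager interfaces and method names (ViewManager.showStatus(), ModelControllerManager.dispatchCommand()) are hypothetical stand-ins for the real ones, and the flow is simulated inline rather than driven by the real Master Controller.

import static org.mockito.Mockito.*;

import org.junit.Test;

public class CommandFlowPatternTest {

    // Hypothetical collaborator interfaces standing in for the real managers.
    interface ViewManager {
        void showStatus(String status);
        void showError(String message);
    }

    interface ModelControllerManager {
        void dispatchCommand(String name, String payload);
    }

    static class RealModelControllerManager implements ModelControllerManager {
        @Override
        public void dispatchCommand(String name, String payload) {
            // In production this would cross the JNI boundary to the C Model Controller.
        }
    }

    @Test
    public void verifiesMockAndSpyInteractions() {
        ViewManager viewManagerMock = mock(ViewManager.class);   // boundary we replace entirely
        ModelControllerManager modelControllerSpy =
                spy(new RealModelControllerManager());           // real behavior, recorded calls

        // Stand-in for the real flow: in the actual test these calls are driven
        // by the Master Controller in response to the initiating action.
        modelControllerSpy.dispatchCommand("SAVE", "{}");
        viewManagerMock.showStatus("SAVE_OK");

        // Step 2: the Model Controller manager received the dispatched command.
        verify(modelControllerSpy).dispatchCommand(eq("SAVE"), anyString());
        // Step 4: the View was told what to display, and no error path fired.
        verify(viewManagerMock).showStatus("SAVE_OK");
        verify(viewManagerMock, never()).showError(anyString());
    }
}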
One added complexity in testing this integration is that the flow crosses thread boundaries in several places. I needed a mechanism for delaying the verifications until a callback is received. The integration test takes much longer to run than a typical unit test (on the order of 15 seconds), but that does not mean we can get sloppy and use sleep statements. Sleep statements would add more delay than necessary and could still leave the tests non-repeatable. It's almost never a good idea to sleep in an automated test...or in any code.

Mockito provides the doAnswer() method, which is very helpful for mocking asynchronous responses from an object, but that does not help us here because we are testing objects we can't mock. What we need is a way of synchronizing with the asynchronous callback, because the code under test relies on that callback completing before it can proceed. Java provides such a mechanism with CountDownLatch. A CountDownLatch blocks callers of await() until its count reaches zero. We set up the latch in the test thread with a count of 1; when we perform the callback on the callback thread, we count down to 0, unblocking the test thread and allowing it to proceed with its verification. We attach a timeout to the await in case the callback is never hit (15 seconds in this case). This is much better than a sleep statement because the timeout is only reached on failure, so we can choose a generous value that we know will pass under all conditions without actually delaying test execution except in the exceptional case of a genuinely failing test.
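Stripped to its essentials, the synchronization pattern looks like this (the worker thread below is a stand-in for the real callback path):

import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.junit.Test;

public class LatchSynchronizationTest {

    @Test
    public void waitsForAsynchronousCallbackWithoutSleeping() throws InterruptedException {
        final CountDownLatch callbackLatch = new CountDownLatch(1);

        // Stand-in for the asynchronous flow: the callback arrives on another thread.
        new Thread(() -> {
            // ...the real callback work happens here...
            callbackLatch.countDown();   // signal the test thread that the callback fired
        }).start();

        // Block until the callback fires, or fail after a generous timeout.
        // The timeout only costs time when the test is already failing.
        assertTrue("Callback was never received",
                callbackLatch.await(15, TimeUnit.SECONDS));

        // Safe to run the verifications here: the callback has completed.
    }
}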



This solution is still not ideal. We had to instrument the test with a handler that overrides the implementation under test, calls the implementation's body, and then counts down the latch. I could have used DI to inject this test handler, but the framework already had a mechanism for registering handlers directly, so we registered the test handler that way.
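Roughly, the instrumentation looked like the sketch below; the handler interface, method, and registration mechanism are hypothetical, since the framework's real API isn't shown here.

import java.util.concurrent.CountDownLatch;

// Hypothetical callback interface standing in for the framework's handler type.
interface ResponseHandler {
    void onModelControllerResponse(String response);
}

// Wraps the production handler: runs the real logic, then releases the waiting test thread.
class LatchedResponseHandler implements ResponseHandler {
    private final ResponseHandler delegate;   // the implementation under test
    private final CountDownLatch latch;

    LatchedResponseHandler(ResponseHandler delegate, CountDownLatch latch) {
        this.delegate = delegate;
        this.latch = latch;
    }

    @Override
    public void onModelControllerResponse(String response) {
        delegate.onModelControllerResponse(response);   // exercise the real implementation first
        latch.countDown();                              // then unblock the test's await()
    }
}

The test registers a LatchedResponseHandler in place of the production handler using the framework's existing registration mechanism, then awaits the latch before verifying.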

Sunday, February 7, 2016

Developing an Integration Test Framework - Case Study


In the last two posts I described a test strategy and unit tests for a controller written mostly in Java. The Master Controller interfaces with an HTTP server hosting the View and with a Model/Model Controller written in C. The Model/Model Controller was developed by a different software group and is effectively a black box for our purposes. I wanted the ability to test both our C/JNI layer and new drops of the Model/Model Controller from the other team. Integration tests gave me both of these capabilities, while also exercising interactions between the modules that are unit-tested in isolation. I could test these interactions from the Master Controller with Java test frameworks by observing expected system responses to stimuli, removing the need for an entirely separate C-based test framework.

The Master Controller code splits its core functionality between "managers" with specific responsibilities, including handling the boundary crossings to the View and to the Model/Model Controller. Each manager is housed in a container class. This is an excellent spot to introduce dependency injection for test components, as shown in ManagerControllerSample.java below:
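The original ManagerControllerSample.java isn't reproduced here, but the general shape is sketched below, assuming field injection and hypothetical manager names; Graph stands for the Dagger2 component.

import javax.inject.Inject;

public class ManagerController {

    // Each field can be satisfied with a production implementation, a Mockito
    // mock, or a spy, depending on which graph is injected (hypothetical names).
    @Inject ViewManager viewManager;                         // boundary crossing to the View
    @Inject ModelControllerManager modelControllerManager;   // boundary crossing to the C Model Controller (JNI)
    @Inject StateManager stateManager;

    public void init(Graph graph) {
        graph.inject(this);   // the Dagger2 component populates the @Inject fields
    }
}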

I created a test framework that lets each integration test decide, for each manager, whether it will be a mock, the standard implementation, or a spied version of the standard implementation. With Dagger2, I was able to inject the appropriate component (test or production) at run time. Graph.java defines the Dagger graph, IntegrationTestBase.java specifies the list of managers to mock and spy (see the call to initGraph()), and ManagerTestDataModule injects the appropriate manager type (standard, mock, or spy):
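Those files aren't reproduced here either, but the selection logic is roughly the following; TestConfig, ViewManagerImpl, and the method names are hypothetical placeholders for the real types.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;

import javax.inject.Singleton;

import dagger.Module;
import dagger.Provides;

@Module
public class ManagerTestDataModule {

    // Hypothetical helper built from the mock/spy lists passed to initGraph().
    private final TestConfig config;

    public ManagerTestDataModule(TestConfig config) {
        this.config = config;
    }

    @Provides @Singleton
    ViewManager provideViewManager() {
        if (config.shouldMock(ViewManager.class)) {
            return mock(ViewManager.class);   // boundary fully replaced by a mock
        }
        ViewManager real = new ViewManagerImpl();
        return config.shouldSpy(ViewManager.class) ? spy(real) : real;
    }

    // ...one @Provides method per manager, following the same pattern...
}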


In the next post I will discuss an integration test that used this framework and a technique for synchronizing callbacks spanning thread boundaries.

Saturday, January 9, 2016

Creating an Automated Test Strategy - Use Case



Background

On a recent project I was working on a team developing a software component as part of a larger system. This component can be thought of as a controller in a distributed MVC system, if you think of the model as having its own embedded controller. Our controller (the Master Controller) was written in Java. We had a JNI layer for interfacing with the C-based Model/Model Controller assembly, and we communicated with a web server hosting the View. I wanted to develop an automated test strategy that would:
  • Validate the Master Controller
  • Guard against regressions in the Master Controller
  • Validate system interactions
  • Allow us to develop functionality independently of the schedule and delivery of new functionality in the View and Model/Model Controller components
While most of this could be accomplished with unit tests, I decided we also needed some level of integration testing for validating the system interactions. In the following description, bear in mind that the Master Controller is the device under test (DUT); it interfaces with the View and the Model/Model Controller, but the tests do not target those components directly.

Strategy

It is always a good idea to discuss and document your test strategy. It was even more important on this project, as the other developers on the team were less familiar with standard automated-testing tenets, coming from environments where automated testing was not a priority. I went about this by performing the following steps:
  • Develop and document the proposed strategy on the internal project wiki
  • Gain team buy-in
  • Implement integration test framework
  • Create tests exemplifying usage for a variety of different types of modules
  • Provide training
Our overall strategy was to rely on unit tests for the majority of functional coverage, supplemented by a powerful integration test environment and a small number of tests validating the interactions between components.

Unit Test

There are some widely adopted principles for creating unit tests. For the wiki and the training, I put together a few simple patterns/anti-patterns. If you are at all familiar with unit testing, there will be no surprises here:

Do

  • Test each module in isolation
  • Test edge conditions
    • bad or null inputs
    • etc
  • Keep tests fast
    • Ideally milliseconds
  • Keep cyclomatic complexity low

Don't

  • Allow timing dependencies other than timeouts
    • No sleeps/timers
  • Span threads in one test
  • Depend on other tests or order of execution
  • Leave artifacts
    • Use @After methods (JUnit) to ensure that artifact cleanup happens regardless of whether the test fails
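For example (a toy sketch with a hypothetical test class and file path):

import java.io.File;

import org.junit.After;
import org.junit.Test;

public class ExportWriterTest {

    // Hypothetical artifact produced by the code under test.
    private final File scratchFile = new File("build/tmp/export-under-test.json");

    @Test
    public void writesExportFile() {
        // ...exercise code that creates scratchFile and assert on its contents...
    }

    @After
    public void cleanUp() {
        // Runs whether the test passed or failed, so no artifacts are left behind.
        scratchFile.delete();
    }
}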
I chose JUnit, Mockito, and PowerMock as our unit-testing tools, based on familiarity with the tools, their popularity, and their suitability for use within our system. Mockito provided most of the mocking functionality we needed, allowing us to:
  • Handle external dependencies easily
    • Good unit tests only test the module under test, none of its dependencies
  • Mock responses from these dependencies
  • Verify the method calls on these dependencies, including
    • Parameters passed
      • With stock argument matchers or custom validators
    • Number of invocations
  • Verify that methods that shouldn't be called are not called
  • Mimic asynchronous callbacks from mocked objects
PowerMock provided us with the key additional capability to mock static method calls.
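The condensed sketch below touches most of the features listed above; the collaborators (SettingsStore, Callback, Clock) are hypothetical stand-ins, not real project or library types.

import static org.mockito.Mockito.*;
import static org.powermock.api.mockito.PowerMockito.mockStatic;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(Clock.class)
public class MockingFeaturesTest {

    // Hypothetical collaborators standing in for the real dependencies.
    interface Callback { void onLoaded(String value); }

    interface SettingsStore {
        String read(String key);
        void readAsync(String key, Callback callback);
        void delete(String key);
    }

    @Test
    public void demonstratesMockingFeatures() {
        SettingsStore store = mock(SettingsStore.class);

        // Mock a response, and mimic an asynchronous callback from the mock.
        when(store.read("theme")).thenReturn("dark");
        doAnswer(invocation -> {
            Callback callback = (Callback) invocation.getArguments()[1];
            callback.onLoaded("dark");   // fire the callback as the real store would
            return null;
        }).when(store).readAsync(eq("theme"), any(Callback.class));

        store.read("theme");
        store.readAsync("theme", value -> { /* react to the callback */ });

        // Verify methods, parameters, and invocation counts; verify what was NOT called.
        verify(store, times(1)).read("theme");
        verify(store).readAsync(eq("theme"), any(Callback.class));
        verify(store, never()).delete(anyString());

        // PowerMock adds static mocking on top of Mockito.
        mockStatic(Clock.class);
        when(Clock.now()).thenReturn(42L);
    }
}

// Hypothetical class with a static method, standing in for something like android.os.SystemClock.
class Clock {
    static long now() { return System.currentTimeMillis(); }
}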

Integration Test

Although unit testing covers the bulk of the testing for the product, it is also useful to validate the interactions between components. In the next post I will describe in detail the integration test strategy I implemented for this project.

System Test

The unit and integration tests created and supported by the development team were only one piece of the overall validation strategy. The QA team tested the overall product manually and with Selenium, creating automated tests driven from the HTML5/JavaScript View component.

CI

For tests to be effective, of course, they need to actually run frequently. Ideally every developer would run the unit test suite before checking in, but that is not enforceable. We had a Jenkins CI environment where we set up the automated tests to run whenever code changes were checked in (polling for changes on a fixed interval). Failures were logged and emailed to the team for resolution. We also tied in a code coverage tool to report progress against our goals.

TDD

For those who are not familiar with test-driven development, the basic idea is that you create your tests before you implement your code, using a recipe like this:

  1. Create your interfaces
  2. Create your tests to these interfaces
  3. Implement your code
  4. Run your tests
  5. Rinse/repeat as necessary until all tests are green (pass)
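As a toy illustration of the recipe (a hypothetical example, not code from the project): the interface comes first, the test is written against it, and only then is the implementation filled in to make the test green.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Step 1: the interface exists first.
interface PriceFormatter {
    String format(long cents);
}

// Step 2: a test written against the interface (red until step 3 is done).
public class PriceFormatterTest {

    @Test
    public void formatsCentsAsDollars() {
        PriceFormatter formatter = new DefaultPriceFormatter();
        assertEquals("$12.34", formatter.format(1234));
    }
}

// Step 3: the implementation, written to make the test pass.
class DefaultPriceFormatter implements PriceFormatter {
    @Override
    public String format(long cents) {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }
}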
Some of the benefits of TDD are:
  1. Forces accurate requirements and interface design up front
  2. Improves code readability, interface design, architecture, and quality (you're clearly much less likely to write untestable code :))
  3. Ensures that tests don't fall behind implementation
As part of this project I adopted a TDD approach, although I found it difficult to adopt whole-heartedly. I found "concurrent" test/development a better fit than strict adherence to the recipe. It definitely took longer to take a TDD approach than it would have to simply write the code, but no longer than it would have to develop the code and then write the tests later. I advocated a similar approach to the other team members as part of the team training, but we did not enforce it.

Up Next

As already mentioned, my next post will delve into the integration test strategy and methodology that I adopted for the team on this project.