As test-loving development teams, we are all painfully aware of how hard it is to get an application into the zen state of development – quick, test-driven red/green feedback for developers, software designs that are functionally on the money thanks to a test-led, “outside-in” approach (from BDD), and, as a result, a nigh-on seamless continuous delivery process. Very few teams achieve this, and those that do have frequently been gifted a green-field project in which to engender it.
As test-savvy teams, when tests start to hamper the release process, we often assume our approach to testing needs an overhaul, but that might not be the case. Here we look at the role of architecture in test-driven applications, and ask whether we should listen to our tests when examining our macro design.
Listening to your tests
When you can’t release easily and repeated regression becomes an issue, we often start to stare inwardly at the frameworks or the tests themselves in an attempt to isolate the causes. Getting clarity on a testing approach, however, can be like looking at the matrix head-on. “Listen to your tests” is the XP mantra – isolate and decouple elements of your codebase so you have better control of the function of the lower-level elements of your application, including the ability to design your codebase using unit tests. Problems related to coupling and software design at the pattern level are hard on us as developers, but they rarely manifest themselves as regressions or blocked releases.
You can and should also listen to your acceptance tests. Acceptance tests are a means of overcoming the micro-focus of lower-level tests. Focused on the way a consumer (not necessarily a user) uses your application, they should be sufficiently high level to catch any integration issues. Deciding on their frequency and coverage, however, is not easy. Acceptance tests exact a penalty in the time taken to run them, and baking them into a CI pipeline can seriously impact the quick feedback CI is meant to give. Acceptance tests are there to provide some safety against regression and to build confidence (forget about design for a minute).
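As a rough illustration of what “consumer-level” can look like in practice, the sketch below drives a hypothetical HTTP API from the outside using JUnit 5 and the JDK’s HttpClient. The endpoint, payload and port are assumptions for the example, not a prescription.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical acceptance test: exercises the application the way a consumer
// (here, another service calling our HTTP API) would, not through its internals.
class OrderApiAcceptanceTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void placedOrderCanBeRetrievedByItsId() throws Exception {
        // Assumes a deployed test instance at this (hypothetical) base URL.
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"ABC-123\",\"quantity\":2}"))
                .build();
        HttpResponse<String> created = client.send(create, HttpResponse.BodyHandlers.ofString());
        assertEquals(201, created.statusCode());

        // Follow the Location header back, as a consuming service would.
        String location = created.headers().firstValue("Location").orElseThrow();
        HttpRequest fetch = HttpRequest.newBuilder().uri(URI.create(location)).GET().build();
        HttpResponse<String> fetched = client.send(fetch, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, fetched.statusCode());
        assertTrue(fetched.body().contains("\"sku\":\"ABC-123\""));
    }
}
```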
When your developers won’t run the tests because they take too long, your approach is definitely not working. Rather than let the tail wag the dog, it might be time to take action. There are some classic approaches for dealing with the situation where you are cowering away from your acceptance tests, mostly beyond the scope of this article, but briefly:
- Get smart about which tests you run, and how frequently (see the sketch after this list).
- Aim for a patchwork approach to tests – you don’t need to blindly test everything. When something does go wrong and a test catches it, that is the BEST reason to persevere with that test.
- Find the areas of business value and test those.
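One way to get smart about which tests run where is to tag suites by cost and value, then select per environment. A minimal sketch, assuming JUnit 5; the tag names (“smoke”, “regression”) are our own illustrative convention:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Illustrative tagging scheme: "smoke" runs on every commit, "regression"
// runs on a schedule or before release.
class CheckoutAcceptanceTest {

    @Test
    @Tag("smoke")
    void checkoutPageLoads() {
        // Fast, high-value path: run everywhere, all the time.
        assertTrue(true); // placeholder assertion for the sketch
    }

    @Test
    @Tag("regression")
    void discountCodesCombineCorrectly() {
        // Slower, broader coverage: run nightly or pre-release.
        assertTrue(true); // placeholder assertion for the sketch
    }
}
```

With Maven Surefire, the commit build can then run only the fast group (for example `mvn test -Dgroups=smoke`), leaving the full regression set for a nightly or pre-release build.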
Test-driven architecture
There is another way to look at it. Just because acceptance testing is hard, it doesn’t mean it is wrong. If you can’t set up state correctly for your tests, if you can’t isolate a number of components and just run tests against those, you should look again at what you have made. Your tests are telling you that the application architecture is too complex, or too tightly coupled.
Your testing approach might actually be valid, but your application is untestable.
The reason we refactor and write tests at the code level using unit tests is that breaking down our code at that level makes it testable. Dependency Injection (DI) is a great concept, and the same thinking works at the service level with microservices. Give yourself a fighting chance by taking a granular approach, embrace cohesion of software components via interfaces, and above all, love the single responsibility principle, scale it up to your services, and fight the urge to muddy their responsibilities. Your infrastructure and your architecture must be as mutable as your software frameworks.
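As a minimal sketch of that idea at the code level (all names here are hypothetical): depend on an interface, inject the implementation, and a unit test can swap in a fake without touching any real infrastructure.

```java
import java.util.Optional;

// A single, narrow responsibility expressed as an interface. The production
// implementation might call another microservice; the test supplies a fake.
interface PaymentGateway {
    boolean charge(String customerId, long amountInPence);
}

class CheckoutService {
    private final PaymentGateway payments;

    // The dependency is injected, not constructed internally, so tests can
    // substitute a stub and the class stays decoupled from infrastructure.
    CheckoutService(PaymentGateway payments) {
        this.payments = payments;
    }

    Optional<String> checkout(String customerId, long amountInPence) {
        if (payments.charge(customerId, amountInPence)) {
            return Optional.of("ORDER-CONFIRMED");
        }
        return Optional.empty();
    }
}

// Unit test with a hand-rolled fake: no network, no container, fast feedback.
class CheckoutServiceTest {
    @org.junit.jupiter.api.Test
    void declinedPaymentProducesNoOrder() {
        CheckoutService service = new CheckoutService((customer, amount) -> false);
        org.junit.jupiter.api.Assertions.assertTrue(
                service.checkout("customer-42", 999L).isEmpty());
    }
}
```

The same shape repeats at the architecture level: the interface becomes a service contract, and the fake becomes a stubbed collaborator in your test environment.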
This approach is no picnic. Tooling, pipelines and infrastructure are now 80% of your project. You need to devote more time than ever to this, and you need to empower your development teams to absorb responsibility for their environments.