Scoping Test Automation Part 1: In Development

A distinction between test automation during development and post-release.

Published: 2017-03-29 19:02:00

Automation encompasses a lot (I actually include dishwashers in this category)! Even under the specialisation of Test Automation, it can still cover a lot. I often feel people don't quite see an important distinction between test automation during development and test automation post-release / deployment. These two scopes have different problems to solve. The types of scenario, the input and validation points of a test, where the test physically resides and how it runs are all very different.

I'm not going to go into methodologies here; this distinction is agnostic of most software development methodologies from the past 20 years. Also, this is not to discount the value of any manual testing, acceptance or otherwise. The role of the tester (as opposed to the test) also varies between these different scopes.

Test Automation during Development

The system under test, be it a stand-alone application, a website or a bank's entire global payment system, is still in development. Features of it are being built incrementally by developers, each working on their own version of the code base. Eventually they will need to commit their version of the code containing their changes and merge it into a central repository. Over time, the code in that central repository changes a lot, and we quickly lose sight of whether or not it all works as one.

Sure enough, developers will have written unit tests around their code. Before committing their changes, they will have run those tests to ensure they all still pass. But the focus of these unit tests is quite minute. A new feature might require several files of code (classes, scripts, etc.) and a unit test would focus only on the functions within one of those files. What would be ideal is another class of test with a wider focus that could test a feature as a whole.
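
As a rough illustration of that narrow focus, assuming JUnit and a hypothetical Account class (both chosen here purely for the example), a unit test at this level exercises one method of one class and nothing more:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical domain class standing in for one small part of a feature.
    class Account {
        private int balance;
        Account(int openingBalance) { this.balance = openingBalance; }
        void deposit(int amount) { this.balance += amount; }
        int getBalance() { return balance; }
    }

    // The unit test focuses on a single method of that single class, nothing wider.
    public class AccountTest {
        @Test
        public void depositIncreasesBalance() {
            Account account = new Account(100);
            account.deposit(50);
            assertEquals(150, account.getBalance());
        }
    }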

There are a number of frameworks available that support automated feature testing within the code base (Cucumber is one such popular choice). They allow someone to define, in a plain text script, a functional scenario that a part of the code base is supposed to be able to perform. That script is then parsed, still within the code base, and executed in order to validate how the declared inputs act upon the code. Ideally, a set of these tests would be enough to define a whole feature.
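
As a rough sketch, using Cucumber's Gherkin syntax and a made-up money transfer feature, such a plain text script could read like this:

    Feature: Transfer money between accounts

      Scenario: Transfer within the available balance
        Given an account with a balance of 100
        When I transfer 40 to another account
        Then the source account balance should be 60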

The automation aspect of this lies in being able to parse that plain text script and convert it into a series of steps that invoke parts of the code base. The script would define the prerequisite conditions, what inputs need to be passed in, what action is to be performed and, finally, what the expected end state should be. These steps all act upon static parts of the code base.
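
A minimal sketch of those steps, assuming a recent version of Cucumber-JVM and the same made-up Account class (with an assumed transferTo method), might look like this; each annotated method matches one line of the script and calls straight into the code base:

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.When;
    import io.cucumber.java.en.Then;
    import static org.junit.Assert.assertEquals;

    // Glue code: each annotated method matches one line of the plain text script
    // and invokes the (hypothetical) domain classes directly, no deployment needed.
    public class TransferSteps {

        private Account source;
        private Account destination;

        @Given("an account with a balance of {int}")
        public void an_account_with_a_balance_of(int balance) {
            source = new Account(balance);      // prerequisite condition
            destination = new Account(0);
        }

        @When("I transfer {int} to another account")
        public void i_transfer_to_another_account(int amount) {
            source.transferTo(destination, amount);   // assumed method on the hypothetical Account
        }

        @Then("the source account balance should be {int}")
        public void the_source_account_balance_should_be(int expected) {
            assertEquals(expected, source.getBalance());   // expected end state
        }
    }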

This does not require the code to be built and deployed to any specialised test environment. The only requirement is that the code compiles. Therefore, as with the unit tests, any developer can also have a set of automated feature tests to run within their version of the code base. This provides several benefits, over and above the unit tests:

  • If these tests pass, we can validate a feature has been built and exists in the code base
  • If these tests pass, we can validate changes to the code base have not broken existing features

But also, if these tests are run regularly and something fails, we should be able to detect what the last change was that broke the feature. This "early and often" feedback loop means issues can be addressed quickly, and re-running those tests will confirm the fix.
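
As a sketch of how those tests might be kicked off, assuming Cucumber-JVM with its JUnit 4 runner (the path and package name below are placeholders), a single runner class lets any developer, or a scheduled build, execute every feature script against their local copy of the code:

    import org.junit.runner.RunWith;
    import io.cucumber.junit.Cucumber;
    import io.cucumber.junit.CucumberOptions;

    // Running this class from the IDE or build tool executes every plain text
    // scenario found under the features path against the local code base.
    @RunWith(Cucumber.class)
    @CucumberOptions(
        features = "src/test/resources/features",  // placeholder location of the .feature scripts
        glue = "com.example.steps"                  // placeholder package holding the step definitions
    )
    public class RunFeatureTests {
    }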

I'll end this here, perhaps a little too soon, as the next logical step would be to show what part these tests play in a Continuous Integration environment, and I believe that goes a little beyond the Development scope.