Types Of Tests

Automated tests should tell you when you have broken something, without generating false alarms or taking excessive time to adjust for intended changes.

Layering test types

You will typically be running more than one set of tests. Usually you start by monitoring your production release, or at least collecting bug reports; that is a type of test. You may add manual or automated regression testing so that you find problems before customers do. Then you will add unit tests, so that anyone working on the code can check their changes. In practice, you add layers going backward from the production release in order to find errors closer and closer to the time when the code changed. This gives developers more confidence. You should keep adding test layers until you reach the quality level that you want.

While testing may seem like a big investment, you can optimize it by working through these layers and finding the most efficient places to add tests. With that in mind, let's evaluate some types of automated tests for their efficiency.

Input-output test

You can use an input-output test if the main function of your software is to transform input data into output data. You can configure a new test just by providing a new input/output pair. The test will check to see if the software output matches the sample output for each pair. This is the most efficient type of test, because it doesn't get any harder to maintain as you add more code.
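As a minimal sketch of this idea in Python (pytest style), the test below assumes the transformation is exposed as a function; the function name and the sample pairs here are hypothetical stand-ins, and a real suite would usually load the pairs from files so that adding a test case is just adding a new pair.

    # Input-output test sketch (pytest style).
    # "transform" and SAMPLE_PAIRS are hypothetical stand-ins; in a real
    # project the pairs would usually be loaded from sample data files.
    import pytest

    def transform(text):
        # Placeholder for the real transformation under test.
        return text.upper()

    SAMPLE_PAIRS = [
        ("hello", "HELLO"),
        ("abc 123", "ABC 123"),
    ]

    @pytest.mark.parametrize("given, expected", SAMPLE_PAIRS)
    def test_transform_matches_samples(given, expected):
        assert transform(given) == expected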

Web services form the foundation of today's scalable cloud systems. You can test Web services with a type of input-output test that calls the service and checks for expected return values.
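A sketch of such a test in Python using the requests library; the URL, the order id, and the expected JSON fields are hypothetical placeholders for your own service's contract.

    # Input-output test against a Web service (hypothetical endpoint and fields).
    import requests

    def test_get_order_returns_expected_fields():
        # The URL and order id are placeholders for your own service.
        response = requests.get("https://api.example.com/orders/1001")
        assert response.status_code == 200
        body = response.json()
        assert body["id"] == 1001
        assert body["status"] in ("open", "shipped", "cancelled")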

UI regression test

A UI regression test uses a tool that simulates a user and analyzes the screens to make sure they match the previous output. For example, Selenium will send input to a Web application and check the screens that come back.
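A short Selenium sketch in Python might look like the following; the URL, the field name, and the expected title are hypothetical placeholders, and a real suite would compare against previously recorded screens.

    # UI regression sketch with Selenium (URL and element names are placeholders).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_search_page_title():
        driver = webdriver.Chrome()
        try:
            driver.get("https://app.example.com/search")
            box = driver.find_element(By.NAME, "query")
            box.send_keys("blue widget")
            box.submit()
            # Compare against the previously recorded output.
            assert "Search results" in driver.title
        finally:
            driver.quit()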

In systems where the UI is important, UI regression tests are not efficient. UIs tend to change frequently, so you spend a lot of time fixing the tests. You may need this type of test when your programming framework doesn't support other types of tests, or when you have a lot of old, static features to regression test. It has the advantage that the test tool itself can have a graphical UI, so the tests can be maintained by QA people who are not part of the core programming team.

A full regression test runs through all of the application functionality to make sure that you did not break something that seems unrelated to your changes. Full regression tests are not very efficient, and they take a long time to run. If you want to release frequently or continuously, you should not require a full regression test. If you think you need a full regression test to prevent unexpected bugs, you should add more test layers until you no longer need it.

Unit test

A unit test is a script that exercises a specific code object by initializing it, calling methods or functions, and checking the return values. Programmers write and run these tests locally, often as part of a test-driven development process.
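A minimal sketch in Python (pytest style): the ShoppingCart class is a hypothetical object under test, and the test initializes it, calls its methods, and checks the return value.

    # Unit test sketch: exercise one object directly (pytest style).
    class ShoppingCart:
        # Hypothetical object under test.
        def __init__(self):
            self.items = []

        def add(self, name, price):
            self.items.append((name, price))

        def total(self):
            return sum(price for _, price in self.items)

    def test_total_sums_item_prices():
        cart = ShoppingCart()
        cart.add("pen", 2.50)
        cart.add("pad", 4.00)
        assert cart.total() == 6.50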

This is an efficient type of testing, and it is probably the most frequently used. Unit tests must be written by programmers, but a good programming framework will include a structure for them, which makes individual tests easy to build. If you make changes to an object, it is fairly easy to find the related tests.

Most teams that use continuous integration ask the person who develops a feature or fixes a bug to also make related unit tests. They often use code review to check that these tests have been created and run.

Integration test

An integration test is a code-level script (i.e., one that doesn't run the normal UI) that tests a complete process involving multiple objects. For instance, you might write a test for "submit an order" which checks that the inventory database is updated, the invoice is correct, and the person placing the order gets the right confirmation email.

An integration test requires more work to build because you have to set up more data, and you might need a mock database. However, there are frameworks like Cucumber that walk you through the process. Integration tests have a medium level of efficiency, and you should build them for important actions if your programming framework supports them.
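As a rough sketch of the "submit an order" example in plain Python (not Cucumber), with the inventory, invoicing, and mail components replaced by small in-memory stand-ins; a real test would wire up the actual objects or mocks.

    # Integration test sketch for "submit an order" (pytest style).
    # Inventory and Mailer are in-memory stand-ins for the real components.
    class Inventory:
        def __init__(self, stock):
            self.stock = dict(stock)

        def reserve(self, sku, qty):
            self.stock[sku] -= qty

    class Mailer:
        def __init__(self):
            self.sent = []

        def send(self, address, subject):
            self.sent.append((address, subject))

    def submit_order(inventory, mailer, sku, qty, price, email):
        inventory.reserve(sku, qty)
        invoice_total = qty * price
        mailer.send(email, "Order confirmation")
        return invoice_total

    def test_submit_order_updates_all_systems():
        inventory = Inventory({"SKU-1": 10})
        mailer = Mailer()
        total = submit_order(inventory, mailer, "SKU-1", 2, 5.00, "buyer@example.com")
        assert inventory.stock["SKU-1"] == 8     # inventory database updated
        assert total == 10.00                    # invoice is correct
        assert mailer.sent == [("buyer@example.com", "Order confirmation")]  # email sent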

Service integration tests are important for managing large projects. They run the complete app, including calls to multiple Web services, and they often run on a centralized test system that has its own substantial database.
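A sketch of a service integration test in Python; the base URL and endpoints are hypothetical, and the test assumes a centralized test environment whose data you control, so that placing an order through one service can be verified through another.

    # Service integration test sketch (hypothetical endpoints on a test system).
    import requests

    BASE = "https://test-env.example.com"

    def test_order_reduces_inventory_across_services():
        before = requests.get(f"{BASE}/inventory/SKU-1").json()["quantity"]
        order = requests.post(f"{BASE}/orders", json={"sku": "SKU-1", "qty": 1})
        assert order.status_code == 201
        after = requests.get(f"{BASE}/inventory/SKU-1").json()["quantity"]
        assert after == before - 1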

Code analysis

Code analysis tools like Sonar or Coverity will read your code and tell you if you have structured it in a way that is likely to cause bugs or security problems. This is not really a test, because it doesn't exercise the code or find specific bugs. However, it is a useful layer of quality improvement. Code analysis is a very efficient type of test because it is totally automated. Your developers will respond well to a message that says "Good job, friend. You increased your quality score from 2.9 to 3.6," even though it comes from a mindless script. You should fit in a code analysis layer if you can. In the future, these static code analysis tools will be enhanced to actively propose bug fixes and make other improvements.

Production logging and monitoring

All software should have an automated system for logging errors and reporting them. You should also be collecting post-release data on user activity and system performance, and you should be collecting user feedback and responding to it.
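As one hedged sketch using Python's standard logging module, an HTTPHandler can forward error-level records to a monitoring endpoint; the host, path, and application call are placeholders for your own collector and code.

    # Error reporting sketch: forward ERROR records to a monitoring endpoint.
    # The host and URL are placeholders for your own collection service.
    import logging
    import logging.handlers

    handler = logging.handlers.HTTPHandler(
        host="monitoring.example.com",
        url="/errors",
        method="POST",
    )
    handler.setLevel(logging.ERROR)

    logger = logging.getLogger("myapp")
    logger.addHandler(handler)

    def risky_operation():
        # Stand-in for real application code.
        raise RuntimeError("example failure")

    try:
        risky_operation()
    except Exception:
        logger.exception("Unhandled error in risky_operation")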

As you improve these capabilities and improve your reaction time, you will have a built-in system for eliminating defects and improving usability, performance and profitability. No matter what your pre-release testing plan, you should be trying to gather as much data as possible about the post-release performance of your software, and you should be trying to reduce the time that it takes your team to respond to this data.

This can be a very efficient type of testing, and if your users can tolerate some errors, you can use it as a substitute for other tests. If your system is online, you can deploy to production before you merge to the mainline. Then you watch the production system to make sure it works correctly. If there is a problem, you roll back to the mainline version; if it works, you merge. This technique gives you a cleaner mainline containing code that has already been tested in production. It makes rollbacks simpler, and it gives the other developers a more stable mainline.
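A rough sketch of that watch-and-decide step in Python; the four helper functions are hypothetical stand-ins for your own deployment and monitoring tooling, and the threshold and timing values are only examples.

    # Deploy-before-merge sketch: watch production, then merge or roll back.
    import time

    def deploy(version):             # stand-in: push the branch build to production
        print(f"deploying {version}")

    def error_rate():                # stand-in: read the production error rate
        return 0.0

    def rollback():                  # stand-in: redeploy the mainline build
        print("rolling back to mainline build")

    def merge_to_mainline(version):  # stand-in: merge the branch into mainline
        print(f"merging {version} to mainline")

    def watch_and_decide(version, threshold=0.01, checks=10, interval=60):
        deploy(version)
        for _ in range(checks):
            time.sleep(interval)
            if error_rate() > threshold:
                rollback()
                return False
        merge_to_mainline(version)
        return True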