It is a pain in the ass to build, maintain, and run automated tests. So, why do we do it?
Automated tests will tell you if you broke something that used to work. However, they are not very good at finding bugs in new features. Bugs usually come from a lack of understanding or perspective, and that same blind spot will be present in the tests. For example, I once worked with an organization that is a well-known proponent of test-driven development. They wrote tests for every bit of new code and ran them in a local development environment. But this code was just as likely to show bugs in new features as code written without tests. In one case, the software only worked in the particular Web browser used in the local test environment (Safari), and it didn't work for actual users on other browsers. After this was fixed, the tests became more useful, because they verified that the feature would continue to work.
An automated test is a script that looks for errors. It runs some of your code and tells you whether it worked as expected or threw an error. When we set up servers to run automated tests frequently, on every change, we call it "continuous integration."
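Here is a minimal sketch of what that looks like, using Python's built-in unittest module; the add function is a hypothetical stand-in for whatever code you actually ship:

```python
import unittest

def add(a, b):
    # Hypothetical code under test -- a stand-in for your real application logic.
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        # Passes quietly when the code works as expected.
        self.assertEqual(add(2, 3), 5)

    def test_handles_negatives(self):
        # Fails loudly (with an AssertionError) when it doesn't.
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()
```

A continuous integration server does little more than run a command like `python -m unittest` after every change and report the failures.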
Building and maintaining automated tests is a lot of work. To get a good return on investment, evaluate your testing by how efficiently it pays off: does it catch real problems without raising false alarms?
You don't want to look stupid in front of your engineers or give them extra work. Invest wisely by understanding why you are building tests, finding real problems, and avoiding false alarms.