Why Am I Writing This Test?

Billy Korando


This article is part of a blog series on automated testing, written in promotion of my new Pluralsight course, Effective Automated Testing with Spring.

A habit I have picked up over the last year is listening to programming podcasts. It has helped me keep current with trends in the industry, and a benefit of the podcast format is that it allows for deep discussions with some of the thought leaders in our field.

Of the programming podcasts I listen to, one of my favorites is Greater Than Code (GTC). The hosts do a great job of bringing in guests and asking questions, often getting into conversations that lead to many interesting insights. On a recent episode, GTC hosted Dan North. Dan North developed Behavior-Driven Development (BDD), so, unsurprisingly, much of the episode was devoted to automated testing, a subject I have a great deal of interest in.

The highlight of the episode for me was when Dan laid out the three major concerns of automated tests. I hadn’t previously heard the major purposes of automated testing laid out in such a succinct fashion. Paraphrased, they are:

  1. Using Tests to Specify the Requirements of the System
  2. Using Tests to Document the System
  3. Using Tests to Build Confidence in the System

With these purposes in mind, it is good practice for both developers and automated testers to ask themselves the following questions when writing a test: Why am I writing this test? Am I specifying system requirements? Documenting system behavior? Building confidence in the system? I’m a firm believer that asking the right questions when writing tests can lead to a better design for individual tests, in addition to more coherent and effective automated test suites.

In this article, we look into the three major purposes for writing automated tests. We discuss how they should be approached and what developers and automated testers can do right now to establish better, more purposeful practices.

Using Tests to Specify the Requirements of the System

The first purpose of automated testing Dan covered was using tests to specify system requirements.

This concern really touches on Dan’s contribution with BDD, and I would recommend reading Dan’s article Introducing BDD to get a better idea of it. Dan does give a great short description during the podcast: these tests should provide guide rails that drive development.

Another important point Dan makes about these tests is that he doesn’t so much view them as tests, but as a specification. I think this is a subtle, but important, distinction. We will look more at why Dan makes this distinction in the section Using Tests to Build Confidence in the System.
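To make this concrete, here is a minimal sketch of what a specification-style test might look like in JUnit 5. The Account class and the given/when/then comments are my own hypothetical illustration, not an example taken from Dan’s article or the podcast:

```java
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical domain class, included only so the sketch is self-contained.
class Account {
    private int balance;

    Account(int openingBalance) {
        this.balance = openingBalance;
    }

    void withdraw(int amount) {
        balance -= amount;
    }

    int getBalance() {
        return balance;
    }
}

// A specification-style test: the behavioral naming and the
// given/when/then structure let the test read as a requirement.
class AccountWithdrawalSpec {

    @Test
    @DisplayName("An account with sufficient funds allows a withdrawal")
    void accountWithSufficientFundsAllowsWithdrawal() {
        Account account = new Account(100);     // Given an account holding 100

        account.withdraw(40);                   // When 40 is withdrawn

        assertEquals(60, account.getBalance()); // Then the balance is 60
    }
}
```

Named and structured this way, a failure reads as a violated requirement rather than merely a broken test, which is exactly the distinction Dan is drawing.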

Using Tests to Document the System

The second concern of automated testing is to serve as documentation. This is a concern I have gotten a much deeper appreciation for over the past year as I have spent time reading up on automated testing best practices.

What makes automated tests so effective as documentation, compared with other types of documentation, is that they are “living documentation.” Every time the test suite is run, the tests, and thus the documentation, are checked for validity. If a test passes, the documentation is still valid; if a test fails, well, then some documentation needs to be updated.

Keeping in mind that tests also serve as documentation encourages writing tests with readability in mind. Tests that are easy to read help other developers and automated testers debug them when they break. They can then either update a test to match the new requirements of the system or remove it if it is no longer valid.

I have touched on tests as documentation a few times in this blog series, such as in my previous article, where I look at using AssertJ for writing assertions in tests. I plan on returning to this subject in future articles as well, so stay tuned.
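As a quick, hypothetical illustration (the shopping-cart names here are invented for this example, not taken from the course or the previous article), AssertJ’s fluent API lets an assertion read almost like a sentence, which is a big part of what makes a test useful as documentation:

```java
import java.util.List;

import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;

class ShoppingCartDocumentationTest {

    // The fluent assertion reads as a sentence, so a failing test
    // points directly at the documented behavior that broke.
    @Test
    void cartListsItemsInTheOrderTheyWereAdded() {
        List<String> cart = List.of("milk", "bread", "eggs");

        assertThat(cart)
                .hasSize(3)
                .containsExactly("milk", "bread", "eggs");
    }
}
```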

Using Tests to Build Confidence in the System

Of the three concerns Dan lays out, I personally found this one to be the most profound.

While learning about automated testing, I struggled with an internal conflict over the difference in responsibilities between developers and automated testers. Part of me felt that a good developer largely wouldn’t need an automated tester, or that a good developer is a good automated tester.

However, listening to Dan and the hosts tease apart this concern really helped clarify it for me. There is a real and meaningful distinction between the responsibilities of developers and automated testers. And, well, it’s in their names: developers develop an application, and automated testers test it!

Okay, that sounded a bit banal, so let me explain. When discussing this concern, Dan and Jessica used the example of a currency converter to explain the distinction. When a developer writes a currency converter, the tests she will primarily be concerned with writing validate the functional requirements of the converter. (Such as: if a user provides X amount in currency Y, how will that convert to currency Z?) These tests ensure the functional requirements of the system are being met, but on their own they likely won’t provide complete confidence that the system will behave appropriately in production.
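To sketch that out, here is roughly what the developer’s functional test might look like. The CurrencyConverter class and its hard-coded exchange rate are hypothetical, standing in for whatever the real implementation would be:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical converter with a hard-coded rate, purely so the sketch runs.
class CurrencyConverter {
    private static final BigDecimal USD_TO_EUR = new BigDecimal("0.85");

    BigDecimal usdToEur(BigDecimal usd) {
        return usd.multiply(USD_TO_EUR).setScale(2, RoundingMode.HALF_UP);
    }
}

// The developer's test: X amount in currency Y converts to currency Z.
class CurrencyConverterTest {

    @Test
    void tenDollarsConvertsToEuroAtTheCurrentRate() {
        CurrencyConverter converter = new CurrencyConverter();

        assertEquals(new BigDecimal("8.50"),
                converter.usdToEur(new BigDecimal("10.00")));
    }
}
```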


Here is where an automated tester would step in to write tests covering the non-functional concerns of the system: edge cases around user behavior, concurrency and load performance, security, and the like. Dan describes these very much as tests rather than specifications, as they are written to build confidence in the system for interested stakeholders: business partners, security team members, the ops team, and so on.
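As an illustration, again with hypothetical names, an automated tester’s edge-case test might probe input the functional specification never mentioned:

```java
import java.math.BigDecimal;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertThrows;

class CurrencyConverterEdgeCaseTest {

    // The earlier hypothetical converter, extended with an input guard
    // that an automated tester's edge-case test would force into existence.
    static class GuardedCurrencyConverter {
        BigDecimal usdToEur(BigDecimal usd) {
            if (usd == null || usd.signum() < 0) {
                throw new IllegalArgumentException("Amount must be non-negative");
            }
            return usd.multiply(new BigDecimal("0.85"));
        }
    }

    @Test
    void negativeAmountsAreRejectedRatherThanSilentlyConverted() {
        GuardedCurrencyConverter converter = new GuardedCurrencyConverter();

        // The functional specification never mentioned negative input,
        // but real user traffic will contain it.
        assertThrows(IllegalArgumentException.class,
                () -> converter.usdToEur(new BigDecimal("-5.00")));
    }
}
```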

So, to summarize: developers write tests to specify the requirements of a system; automated testers write tests to give stakeholders confidence that the system will work.

Closing Thoughts

In my experience, automated testing hasn’t been treated with nearly the amount of care and importance it deserves. I believe the major underlying cause is that organizations often measure the quality of a test suite by code coverage or test count.

In practice, these quality metrics lead to a negative feedback loop. The tests being written don’t necessarily line up with the functional requirements of the system, aren’t readable, and leave the test suite with large gaps in coverage. This leaves stakeholders with little confidence that a build that passes all of its tests will function appropriately in production.

Changing an organization’s culture around automated testing is a long process. But, as developers and automated testers, there are things we can do right now.

When writing automated tests, ask yourself questions. Why are you writing this test? How does it address the three purposes: specifying system requirements, providing documentation, and building confidence in the system?

If you don’t think a test addresses these concerns, it might be good to rethink the test’s design, or whether the test is even still needed.

Automated Testing Series

  1. Without Automated Testing You Are Building Legacy
  2. Four Common Mistakes That Make Automated Testing More Difficult
  3. Encouraging Good Behavior with JUnit 5 Test Interfaces
  4. Conditionally Disabling and Filtering Tests in JUnit 5
  5. What’s New in JUnit 5.1
  6. Fluent Assertions with AssertJ
  7. Why Am I Writing This Test? (this post)
  8. What’s New in JUnit 5.2