ATDD and Effective Exploratory Testing

By Alex Yakyma.

Acceptance Test-Driven Development relies heavily on automated scenarios that reflect the key behaviors of the system. In essence, it is the team process for physically binding the requirements, expressed as acceptance tests, to the code. Besides these key acceptance tests – in other words, examples of system functionality – teams normally also want to ensure that certain less obvious and less predictable behaviors are implemented correctly. "Less predictable" and "less obvious" do not mean less probable. In fact, teams, by definition, have their "implementer bias" and may easily overlook things. That is the purpose of exploratory testing: to look at the system from a new angle, to try scenarios that have not been tried before, to play with different combinations of environment and configuration settings, and so forth. This important effort involves both creativity on the part of testers and hard work in running the desired scenarios under specific conditions. The process most often entails one or more of the following problems:

• It consumes a lot of the testers' time. Very often they perform all or almost all operations manually, which, apart from the execution of desirable scenarios, also includes manual environment setup, data preparation as part of different upstream scenarios, etc.

• It takes developers a lot of extra time when they have to create and maintain special harnesses for manual testing.

• The tests are not persistent. Even when the team performs some kind of automation, there is typically a considerable gap between the process of exploratory testing and the automation of the valuable new test scenarios obtained as a result of exploration. The gap is in both time and skill set: exploratory testing is often performed by manual testers, while automation is most often done by developers or automated test engineers.

• Testers' options are mostly constrained to black-box testing. System-level scenarios result from combinations of different conditional flows that occur in different parts of the code. The problem is that it is impossible to test the system by operating at a high level only: we simply run into a "combinatorial explosion", a stratospheric number of scenarios.

• Sharing knowledge and context with developers is ineffective. While testers may play with certain types of system behaviors and even catch some interesting defects, there is almost no sharing of their tacit knowledge about the system with the developers, no collaborative analytical thinking, and no generalization or synthesis. There is no way to derive specific, useful implementation or design takeaways from the findings discovered during exploratory testing.

ATDD turns out to be very useful in addressing these problems. The secret lies in the team's ability to quickly automate new scenarios, in the tooling, which turns out to be extremely useful, and in the constant collaboration that drives the ATDD process. Here is how it works.

ATDD relies on automation tools like Cucumber, SpecFlow, FIT, Robot Framework, etc., which allow scenarios to be defined in business language. These scenarios are then physically linked to the system by what are called "bindings" or "scenario step definitions" – little pieces of code that interpret the corresponding scenario steps and run them against the system. Very often these steps are parameterized. For example, this would be a step written in the Gherkin language, which is used in tools like SpecFlow and Cucumber: "customer provides <street>, <city> and <zip code>, and whether this is a permanent address". This enables testers to do exploratory testing at a whole new level:
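To make the binding mechanism concrete, here is a minimal sketch of how a tool might map business-language steps to code. This is not the actual Cucumber or SpecFlow API; the step registry, the regex patterns, and the in-memory customer record standing in for the system are all illustrative assumptions:

```python
import re

# Toy step registry: each binding pairs a regex for a business-language
# step with the function that runs it against the system.
STEP_DEFINITIONS = []

def step(pattern):
    """Register a step definition (binding) for a Gherkin-style step."""
    def decorator(func):
        STEP_DEFINITIONS.append((re.compile(pattern), func))
        return func
    return decorator

# Hypothetical "system under test": an in-memory customer record.
customer = {}

@step(r'customer provides "(?P<street>[^"]+)", "(?P<city>[^"]+)" and "(?P<zip>[^"]+)"')
def provide_address(street, city, zip):
    customer["address"] = {"street": street, "city": city, "zip": zip}

@step(r'the address is marked as (?P<kind>permanent|temporary)')
def mark_address(kind):
    customer["address"]["permanent"] = (kind == "permanent")

def run_step(text):
    """Interpret one scenario step: find the matching binding and execute it."""
    for pattern, func in STEP_DEFINITIONS:
        match = pattern.search(text)
        if match:
            func(**match.groupdict())
            return
    raise LookupError(f"No step definition matches: {text!r}")

# Running a scenario is just running its steps in order. A tester can
# vary the quoted parameters freely without touching the binding code.
run_step('customer provides "12 Main St", "Springfield" and "49007"')
run_step('the address is marked as permanent')
```

Because the parameters live in the step text rather than in the code, exploring a new combination of inputs is a matter of editing a line of plain text, which is exactly what makes this workflow accessible to testers.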

• Testers use the same tools as developers to "play with scenarios". The only difference is that they only play with the input parameters of existing scenario steps, which does not require any coding.

• When testers need a new step for exploratory testing that does not yet exist, they ask developers to create it. They often pair with developers to make sure that the right steps are built. 

• When binding new key scenarios to the code, which happens as part of the normal ATDD flow, testers and developers ensure that steps are properly parameterized so that testers can immediately use them for exploratory testing, apart from their primary function. Sometimes, however, teams decide to hide some hairy details in the step bindings (usually for better readability). In such cases, testers may ask developers to simply create a "parameterized" version of the steps, to be used primarily for exploratory testing. Once the initial binding code is in place, this becomes a very quick operation.

• By easily combining and running different steps with different input/output parameters, testers can more effectively capture unexpected outcomes. Here is the fun part: every finding can be captured as an automated test by simply replicating the set of steps with the specific set of parameters into a separate scenario file. 

• Frequent collaboration between developers and testers flows very naturally this way. Developers immediately learn what is wrong with the system, and testers learn to read and understand the binding code by looking over the developer's shoulder, which lets them operate much more in "white box", or at least "grey box", terms. There is nothing sophisticated about the binding code. In fact, it has to be fairly simple by definition: those who write automated scenarios keep the bindings simple in order to prevent false positives and false negatives. It is really that simple!
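As an illustration of capturing a finding as a persistent test, a tester might save the exact steps and parameters that produced a surprising result into a scenario file like the one below. The feature name, steps, and data here are hypothetical, written in Gherkin as used by Cucumber and SpecFlow:

```gherkin
Feature: Customer address validation

  # Captured during exploratory testing: certain zip code formats
  # combined with a permanent address produced unexpected results.
  Scenario Outline: Address is accepted for all supported zip formats
    Given customer provides "<street>", "<city>" and "<zip code>"
    And the address is marked as <permanence>
    When the customer profile is saved
    Then the address is accepted

    Examples:
      | street     | city        | zip code | permanence |
      | 12 Main St | Springfield | 49007    | permanent  |
      | 8 Rue Cler | Paris       | 75007    | temporary  |
```

From this point on, the exploratory finding is no longer tacit knowledge: it runs with the rest of the automated suite on every build.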

After an iteration or two, this approach becomes so obvious and natural to testers that everything they previously did in terms of exploratory testing seems ridiculously inefficient to them. The approach allows them not only to explore more, but also to capture findings so they can be run over and over again. It helps testers externalize valuable knowledge and turn it into better software through collaboration with developers.
