
ATDD and Effective Exploratory Testing

By Alex Yakyma.

Acceptance Test-Driven Development relies heavily on automated scenarios that reflect the key behaviors of the system. Essentially, it is the practice, and the team process around it, of physically binding the requirements, expressed as acceptance tests, to the code. Besides these key acceptance tests, or in other words, examples of system functionality, teams normally also want to ensure that certain less obvious and less predictable behaviors are implemented correctly. "Less predictable" and "less obvious" does not mean less probable. In fact, teams by definition have their "implementer bias" and can easily overlook things. That is the reason for exploratory testing: to look at the system from a new angle, to try scenarios that have not been tried before, to play with different combinations of environment and configuration settings, and so on. This important effort involves both creativity on the part of testers and hard work in running the desired scenarios under specific conditions. The process most often entails one or more of the following problems:

• It consumes a lot of the testers' time. Very often they perform all or almost all operations manually, which, apart from executing the desired scenarios, also includes manual environment setup, data preparation as part of various upstream scenarios, and so on.

• It costs developers a lot of extra time when they create and maintain special harnesses for manual testing.

• The tests are not persistent. Even when the team performs some kind of automation, there is typically a considerable gap between the exploratory testing process and the automation of the valuable new test scenarios it yields. The gap is in both time and skillset, because exploratory testing is often performed by manual testers while automation is most often done by developers or test automation engineers.

• Testers' options are mostly constrained to black box testing. System-level scenarios result from combinations of different conditional flows that occur in different parts of the code. The problem is that it is impossible to cover the system by operating only at this high level; we simply run into a "combinatorial explosion", a stratospheric number of scenarios.

• Knowledge and context sharing with developers is ineffective. While testers may play with certain types of system behavior and even catch some interesting defects, there is almost no sharing of their tacit knowledge about the system with the developers, no collaborative analytical thinking, and no generalization or synthesis. There is no way to derive specific, useful implementation or design takeaways from the findings discovered during exploratory testing.

ATDD turns out to be very useful in creating conditions that help address these problems. The secret is in the team's ability to quickly automate new scenarios, in tooling that turns out to be extremely useful, and in the constant collaboration that drives the ATDD process. Here is how it works.



ATDD relies on automation tools like Cucumber, SpecFlow, FIT, Robot Framework, etc., which allow scenarios to be defined in business language. These scenarios are then physically linked to the system by what are called "bindings" or "scenario step definitions": little pieces of code that interpret the corresponding scenario steps and run them against the system. Very often these steps are parameterized; for example, a step written in the Gherkin language, which is used in tools like SpecFlow and Cucumber, might read "customer provides <street>, <city> and <zip code>, and whether this is a permanent address" (a minimal sketch of such a binding follows the list below). This enables testers to do exploratory testing at a whole new level:

• Testers use the same tools as developers to "play with scenarios". The only difference is that they only vary the input parameters of existing scenario steps, which does not require any coding.

• When testers need a new step for exploratory testing that does not yet exist, they ask developers to create it. They often pair with developers to make sure that the right steps are built. 

• When binding new key scenarios to the code, which happens as part of the normal ATDD flow, testers and developers ensure that the steps are properly parameterized so that testers can immediately use them for exploratory testing, apart from their primary function. Sometimes, however, teams decide to hide some hairy details inside the step bindings (usually for better readability). In such cases, testers may ask developers to create a "parameterized" version of those steps, to be used primarily for exploratory testing. Once the initial binding code is in place, this becomes a very quick operation.

• By easily combining and running different steps with different input/output parameters, testers can more effectively capture unexpected outcomes. Here is the fun part: every finding can be captured as an automated test simply by copying the set of steps, with the specific parameter values, into a separate scenario file.

• Frequent collaboration between developers and testers flows very naturally this way. Developers immediately learn what is wrong with the system, and testers learn to read and understand the binding code by looking over the developer's shoulder, which lets them operate much more in "white box", or at least "grey box", terms. There is nothing sophisticated about the binding code. In fact, it has to be fairly simple by definition: those who write automated scenarios rely on that simplicity to prevent false positives and false negatives. It is really that simple!
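
To make the idea of a parameterized step binding more concrete, here is a minimal sketch of what one might look like, assuming a recent version of Cucumber-JVM. The step wording, the AddressSteps class and the in-memory AddressService are hypothetical illustrations, not code from an actual project:
------------------------------
// A minimal sketch of a parameterized step binding, assuming a recent Cucumber-JVM.
// The step wording, AddressSteps and AddressService are hypothetical illustrations.
import static org.junit.Assert.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;

public class AddressSteps {

    // Stand-in for the real system under test; a real binding would call the
    // application itself (an API client, a UI driver, etc.).
    static class AddressService {
        boolean accepted;
        void submit(String street, String city, String zip, boolean permanent) {
            accepted = !street.isEmpty() && !city.isEmpty() && !zip.isEmpty();
        }
    }

    private final AddressService service = new AddressService();

    // Because the step is parameterized, testers can explore many different inputs
    // (empty strings, very long values, unusual zip codes, ...) just by editing the
    // scenario text, without writing or changing any binding code.
    @Given("customer provides {string}, {string} and {string} and says the address is permanent: {string}")
    public void customerProvidesAddress(String street, String city, String zip, String permanent) {
        service.submit(street, city, zip, "yes".equalsIgnoreCase(permanent));
    }

    @Then("the address is accepted")
    public void theAddressIsAccepted() {
        assertTrue("the address should have been accepted", service.accepted);
    }
}
------------------------------
Any interesting combination a tester discovers with such a step can be preserved simply by copying the steps, with their concrete parameter values, into a separate scenario file.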

After an iteration or two, this approach becomes so obvious and natural to testers that everything they had previously done in terms of exploratory testing seems ridiculously inefficient to them. It allows them not only to explore more, but also to capture their findings so they can be run over and over again. It helps testers externalize valuable knowledge and, through collaboration with developers, turn it into better software.

Specification by Example and Card-Conversation-Confirmation

By Alex Yakyma. 

Recently I happened to encounter a lack of specificity in a purportedly trivial situation. I was at a restaurant in Longmont, CO and wanted to start in my rather typical way, having tea with honey. Given that it was snowing, that seemed like quite a natural thing. I ordered "black tea with honey", and the waitress asked "why black?", to which I answered that, in some countries, unless you say "black tea" or "black coffee", you will receive it with milk. I also noted that, this way, one can distinguish it from green tea, which is an entirely different thing. She smiled and, in a minute, I got my... you guessed it... glass of iced tea (actually with a lot of ice in it) and a little vessel of honey on the side. I was speechless, and she disappeared before I recovered. I felt like Eric Cartman with his funny-fuse, when he wasn't able to laugh any more :) Anyway, to make a long story short, I knew what I wanted, and we even had a chat about it. I thought we had defined everything clearly, but it turns out we didn't. Sound familiar?

I think the XP folks influenced the software development industry much more significantly than we may otherwise think. In fact, user stories are not a Scrum invention; they were institutionalized in XP. In particular, Ron Jeffries defined what we know as the "3C": Card, Conversation and Confirmation, three important aspects of user stories (actually of any stories, not just user ones). The triad means a physical card on which the story is written, a conversation between those who define and those who implement it to clarify the details, and, finally, the acceptance criteria by which it is decided whether the job is done. This is a great conceptualization of what teams need to keep in mind. Nevertheless, the 3C itself depends on how effective the conversation is at identifying detail; otherwise... you may get your iced tea with honey.

A useful technique for making the 3C work efficiently is Specification by Example, a collaborative approach to defining requirements based on concrete examples instead of abstract statements. The "second C" in this case is continually supplied with examples, which become a natural way for the team and stakeholders to disambiguate requirements. What's important is that the examples are goal-driven, so the Conversation remains meaningful. Had I said that, to warm up in such cold weather, I needed a cup of good Earl Grey tea with honey, there likely would never have been any confusion in the first place. By the way, there are indeed waiters and waitresses with a good facilitation skill for things like Specification by Example, but they are as rare as good facilitators in the software domain. Nevertheless, here is a set of simple recommendations that I give during ATDD coaching:
  • Refine requirements collaboratively as a team and involve external stakeholders as much as possible; keep in mind that an agile team you depend on (or that depends on you) is just another "stakeholder", so invite some of its representatives.
  • Include a Specification Workshop (where Specification by Example actually comes into play) in every backlog grooming and sprint planning session.
  • Actively use BVIRs (whiteboards, flipcharts, etc.).
  • Use tables to represent variations in data, flow conditions - virtually everything. The table format is the most powerful collaborative tool for refining requirements, regardless of whether the examples later end up in table form in your ATDD tool.
  • Do not allow any abstractions in the examples. Avoid those "X"s and "Y"s at all costs. You will be surprised how much your team learns about a seemingly familiar user story once they replace all the Xs and Ys with real values.
  • Stay timeboxed story-wise and move to the next story every 10-15 minutes. If not done with the story, no worries - you can return to it. Just don't let it go too deep and consume too much time.
  • Capture all agreed upon examples - these are your acceptance tests (one way to turn them into executable tests is sketched after this list). Capture all open questions (to stakeholders or otherwise) and resolve them first.
  • Hold Specification Workshops every iteration - team self-discipline is key here. Scrum Masters may find this a real challenge for their facilitation, coaching and mentoring skills as they help their team adopt and sustain Specification by Example as an effective cadence-based activity.
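
As a hedged illustration of the last two points, here is one possible way (certainly not the only one) to capture a small table of agreed examples directly as executable acceptance checks, assuming JUnit 5 parameterized tests. The volume-discount rule, the Pricing class and the numbers are hypothetical and only stand in for whatever examples your workshop produces:
------------------------------
// A sketch, assuming JUnit 5 (junit-jupiter-params): each row of the agreed
// example table becomes one executable acceptance check. The volume-discount
// rule and the Pricing class are hypothetical illustrations.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DiscountExamplesTest {

    // Stand-in for the production code the examples describe.
    static class Pricing {
        static int discountPercent(int orderTotal) {
            return orderTotal >= 100 ? 10 : 0;
        }
    }

    // The CSV rows mirror the workshop table: concrete values only, no "X"s or "Y"s.
    @ParameterizedTest(name = "an order of {0} gets a {1}% discount")
    @CsvSource({
        "99, 0",
        "100, 10",
        "250, 10"
    })
    void orderDiscountMatchesAgreedExamples(int orderTotal, int expectedDiscount) {
        assertEquals(expectedDiscount, Pricing.discountPercent(orderTotal));
    }
}
------------------------------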

It is amazing how much a team can achieve when they know what they are building. User stories, as "containers" of value, in conjunction with concrete examples, are a powerful tool that makes a team extremely productive in delivering business value incrementally.

ATDD with JUnit: No Extra Cost!

By Alex Yakyma

There are a couple of good ATDD tools already on the market, and all of them are free, but they don't come entirely free in the end. There is certainly a learning curve for each. Even more importantly, there is a real danger that the team may not adopt them, which can hurt the adoption of ATDD itself.

There is surprising news for such teams, though: if they are already using JUnit (or almost any unit testing framework from the xUnit family), it can be used for ATDD very effectively.

Let me show this with an extremely simple example. We are building a simple Calculator service; that's the brief context.

First, let's specify our examples of system behavior, which will also serve as the specification for the acceptance tests.
------------------------------
Adding 2 and 3 is 5

Adding 0 and -1 is -1

Multiplying 3 by 6 is 18

Multiplying 4 by 0 is 0

Multiplying -1 by 16 is -16

Multiplying 7 by 1 is 7
------------------------------

...it's very straightforward, isn't it?

Now what we do is implement the test (using JUnit):
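
The original post showed the test as an image; here is a minimal reconstruction of what it might look like, assuming JUnit 4 and a Calculator with add and multiply methods (the method names and the inline Calculator stub are assumptions made to keep the sketch self-contained):
------------------------------
// A minimal sketch, assuming JUnit 4. The Calculator stub and its method names
// are assumptions; in a real project the tests would target the actual service.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorAcceptanceTest {

    // Stand-in for the real Calculator service under test.
    static class Calculator {
        int add(int a, int b) { return a + b; }
        int multiply(int a, int b) { return a * b; }
    }

    private final Calculator calculator = new Calculator();

    @Test
    public void adding() {
        // Each assertion message is, word for word, the agreed specification.
        assertEquals("Adding 2 and 3 is 5", 5, calculator.add(2, 3));
        assertEquals("Adding 0 and -1 is -1", -1, calculator.add(0, -1));
    }

    @Test
    public void multiplying() {
        assertEquals("Multiplying 3 by 6 is 18", 18, calculator.multiply(3, 6));
        assertEquals("Multiplying 4 by 0 is 0", 0, calculator.multiply(4, 0));
        assertEquals("Multiplying -1 by 16 is -16", -16, calculator.multiply(-1, 16));
        assertEquals("Multiplying 7 by 1 is 7", 7, calculator.multiply(7, 1));
    }
}
------------------------------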

Notice that for all these tests there is now an important nuance that many developers would miss while working on their unit tests:

Each assertion now includes a message, and the message is exactly the specification.

This fulfills one of the key roles of ATDD - sustaining living documentation and traceability to the code. Now, if something fails, you know exactly what it is:
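
With a sketch like the one above, a regression in the multiplication code would produce a failure message roughly like the following (the exact wording depends on the JUnit version):
------------------------------
java.lang.AssertionError: Multiplying 4 by 0 is 0 expected:<0> but was:<4>
------------------------------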

As we can see, JUnit now tells us exactly what is going wrong with the code, or rather, exactly what it fails to do at this moment in time.

Very simple! And if your team already uses JUnit, there is no additional effort to master a new tool, so you can concentrate on the technique itself.

This reminds me to mention, in closing, that tooling is really secondary in ATDD; what is first and foremost is to ensure that the examples the team automates and then implements come from the people in the company who actually know what their client wants.