Gherkin Tips & Tricks

A few months ago, I was selected to give a presentation at the Testnet ‘voorjaarsevenement’ (spring event). My talk was about Gherkin and how to improve your Feature files and step definitions. It’s time to put it on the blog.

Instead of writing an introduction to Gherkin, it’s better to point you to the Cucumber wiki. They have a very good explanation of it. So let’s move on to the tips & tricks!

Feature file tips

  • Avoid long descriptions
    Features should have a short and sensible title and description. This improves readability and you do want readable and understandable Feature files, don’t you? So there should be one sentence describing the scope and content.
  • Choose a single format
    I don’t care if you pick “As a [role], I want [feature] so that [benefit]” or “In order to [benefit], as a [role] I want [feature]”. However, pick one and stick with it. Again, this improves readability. And never forget the benefit: it makes it easier to decide on the business value.
  • Features… not big portions of the application
    There should be only one Feature per file, and the Feature should be reflected in the file name. If you work in larger teams, it is easier to work in parallel on smaller Feature files.
  • Domain language
    Involve the customers and use their domain language. Involve them in writing the user stories, or at least have them review them. Keep the language consistent across all Features (and even projects).
  • Organization
    Organize your Features and Scenarios with the same discipline you would apply to code. You can organize them according to execution time: fast (<0.1 s), slow (<1 s) or glacial (>1 s). Or put them in separate subdirectories. Or tag them.
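Putting these tips together, the top of a Feature file might look like this (the flight-search domain and wording are just an illustrative example, not from a real project):

```gherkin
# features/flight_search.feature — one Feature per file, named after the Feature
Feature: Flight search
  In order to find a suitable connection
  As a traveller
  I want to search for flights between two airports
```

One short title, one benefit-first description, and nothing more.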

Background tips

  • Use Backgrounds
    Backgrounds are ideal to avoid repetition in the Feature file. However, since the reader has to keep the Background in mind while reading or writing the Scenarios, keep them short: 4 lines at most.
  • Don’t include technical stuff
    The Feature file is about the user. Starting or stopping the webserver, clearing tables, etc. can be implemented in the Step Definitions; it shouldn’t be mentioned in the Background.
    Of course, don’t use a Background if you have only one Scenario.
  • Don’t mix
    Don’t mix backgrounds in the Feature file with @before hooks in the Step Definitions. This will drive you nuts when debugging.
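A Background that follows these tips could look like this (a hypothetical example; the steps stay in business language, with no technical setup):

```gherkin
Feature: Message inbox

  Background:
    Given I am logged on as "alice"
    And I have 2 unread messages

  Scenario: Reading a message
    When I open the first unread message
    Then it should be marked as read
```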

Scenario tips

  • Scenario vs Scenario Outline
    It is actually very simple: if you have only one example, use a Scenario.
    If you have more than one example, use a Scenario Outline with an Examples table.
  • Short
    Keep your scenarios short.
    Hide the implementation details.
  • Declarative steps over imperative steps
    Using declarative steps is much more concise and about WHAT the user wants to do with the system, not HOW the user wants to do it.
    Declarative example:
    Given I have logged on to the system
    Then I should see my new messages
    Imperative example:
    Given I am on the login page
    When I fill in the username
    And I fill in the password
    And I click on the ‘Submit’ button
    Then I should be logged on to the system
    And I should see my new messages
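To illustrate the Scenario vs Scenario Outline tip above: with more than one example, the steps move into a Scenario Outline with an Examples table (a hypothetical example):

```gherkin
Scenario Outline: Password validation
  When I choose the password "<password>"
  Then I should see the message "<message>"

  Examples:
    | password | message            |
    | abc      | Password too short |
    | s3cret99 | Password accepted  |
```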

Step tips

  • AND/OR are keywords
    so don’t use them within a step:
    Given I’m on the homepage and logged on
    Should be
    Given I’m on the homepage
    And I’m logged on
  • Cover happy and non-happy paths
    Testing is more than only proving it works. It’s trying to break the system.
  • Refactor
    Your library of steps will grow over time, so try to generalize your steps to increase reuse.
    Your understanding of the domain will also grow, so update your language and your steps accordingly.

Tag Tips

  • Never tag the background
    Tags allow you to organize your Features and Scenarios, and you can have multiple Tags per Feature or Scenario. Backgrounds, however, cannot be tagged, so never try to.
  • Don’t tag Scenario with same Tag as Feature
    Feature Tags are also valid for all child Scenarios, so there is no need to apply the same Tag to the Scenarios.
  • What is the benefit of Tagging a Feature?
    You can Tag individual Scenarios, so think about what value would be added by Tagging an entire Feature. There may not be much use for it, except perhaps Tagging the Feature with the story number.
  • Tag categories
    Possible tag categories may be
    Frequency of execution: @checkin, @hourly, @daily, @nightly
    Dependencies: @local, @database, @fixtures, @proxy
    Level: @acceptance, @smoke, @sanity
    Environment: @integration, @test, @stage, @live
    Some groups also Tag according to progress: @wip, @todo, @implemented, @blocked.
    I’m not saying this is bad, but if you do, make sure you keep the Tags up to date! If you can’t do that, don’t use them.
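Combining these categories, a tagged Feature might look like this (the tag names follow the categories above; note that the Scenario does not repeat the Feature’s Tags):

```gherkin
@smoke @nightly
Feature: Flight booking

  @database @wip
  Scenario: Booking a return flight
    Given I have selected an outbound and a return flight
    When I confirm the booking
    Then I should receive a booking confirmation
```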

Step Definition Tips

Most of the tips below will increase the reuse of your Step Definitions and the readability of the Feature files.

  • Use flexible pluralization
    Add a ? after the plural ‘s’:
    Then /^the users? should receive an email$/ do
    Now it will match both “user” and “users”.
  • Use non-capturing groups
    Instead of (some text), use (?:some text). Now the result is not captured and not passed as an argument to your step definition.
    It is especially useful in combination with alternation:
    When /^(?:I|they) create a profile$/
    And /^once the files? (?:have|has) finished processing$/ do
  • Consolidate Step definitions
    When /^the file is( not)? present$/ do |negate|
        negate ? check_if_file_is_not_present : check_if_file_is_present
    end
  • Use unanchored regular expressions
    Normally you anchor the start with ^ and the end with $. Sometimes it can be useful to omit one:
    Then /^wait (\d+) seconds/ do |seconds|
    This will allow for more flexible expressive steps:
    Then wait 2 seconds for the calculation to finish
    Then wait 5 seconds while the document is converted
    Of course this can be dangerous, so only omit an anchor when the extra text cannot change the meaning of the step, as in this example 😉
  • Be DRY
    Don’t Repeat Yourself.
    Refactor when necessary and reuse Step Definitions within a project across Features and perhaps even across projects.
  • Parse date/time in a natural way
    For most programming languages there are libraries that allow you to parse dates and times in a more natural way. For example in Ruby you have Chronic and in Python you can use parsedatetime or pyparsing.
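The regular-expression tips above are easy to check in plain Ruby, outside of Cucumber (the step texts are the examples from this post):

```ruby
# Flexible pluralization: ? makes the preceding letter optional.
plural = /^the users? should receive an email$/

# Non-capturing group with alternation: (?:...) matches but captures nothing.
processing = /^once the files? (?:have|has) finished processing$/

# Consolidated step: the optional capture tells the step definition
# which variant of the step was used.
presence = /^the file is( not)? present$/

puts plural.match?("the user should receive an email")          # true
puts plural.match?("the users should receive an email")         # true
puts processing.match?("once the file has finished processing") # true
puts presence.match("the file is not present")[1].inspect       # " not"
puts presence.match("the file is present")[1].inspect           # nil
```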

So there we have it: quite a few tips to improve your Gherkin Feature files, Backgrounds and Step Definitions.

However, the most important tip I can give you is to exercise discipline when writing your test automation code:

  • treat your code as production code
  • refactor when necessary
  • run your tests as often as possible
  • and don’t be too smart: somebody needs to understand the code next year. And that person might even be you.

For your benefit I have included my presentation and the checklists I created.

Quality Center and BPT testing experiences

At my current customer we are working extensively with Business Process Testing (BPT) in Quality Center 10, and now recently ALM 11.52.

As you may know, BPT is an implementation of keyword-driven test automation.  The idea of BPT (or keyword-driven testing) is very nice:

  • the test analyst defines Business Components (BC’s, the keywords)
  • the test engineer implements these BC’s
  • the test analyst uses the BC’s in test cases

The benefits can be big:

  • Nice separation between test analyst and test engineer.  It can be difficult to find a test analyst who can also write test code, or a test engineer who is willing to learn the business processes.
  • Test ‘scripts’ become more readable in keyword form.

HP has several documents and articles on BPT.

Unfortunately, we have two issues: 1) BPT in Quality Center is badly implemented, and 2) our usage of BPT is also badly implemented.

1) BPT in Quality Center is badly implemented

  • The editor doesn’t allow copy/paste within a BPT test. This was a problem in QC10 and it still is in ALM 11.52. It is now possible to open a BPT test in UFT, but guess what: you cannot open two BPT test cases in one UFT session, so it is still not possible to copy steps between BPT test cases. It is also not possible to copy a few BC calls several times within a test case; the BC has to be imported every time. Very frustrating!
  • The editor isn’t flexible. In QC10 the editor displayed the BC’s in a grid-like manner: each row was a call to a BC, and the columns held the name of the BC, status, input parameters, output parameters, …
    If BC’s have several parameters, they are all listed, which means only a few BC’s fit on the screen. The user loses the overview of the test case. Retrieving all the parameter values also made this view very slow.
    In ALM 11.52 the editor changed: the grid view loads faster now and the tester keeps an overview of the complete test case. However, what HP giveth, HP taketh away: parameters are no longer as easily visible; an extra window has to be opened. And you know HP: it never remembers window settings! So every time you have to resize the window and its columns.
  • The editor doesn’t allow commenting BPT test or flow steps. This makes the test case harder to read while editing, and the test report also becomes less readable. For the latter, we made an “Add_comment” BC that adds info to the test report. However, in ALM 11.52 its parameter is not visible in the BPT editor. Another solution is to group a few steps and rename the group to a more meaningful name. But grouping adds yet another click to retrieve the parameters of the BC’s in the group. And the group name doesn’t appear in the test report…
  • BPT has a disastrous effect on our ALM performance. In QC10 we had to turn off version control because we have so many BPT tests. A few months ago we upgraded to ALM 11.52 with version control turned on. After a week, we had to turn it off again for performance reasons.

There are several more issues with BPT/ALM, but these are the most important ones for us.

2) Our usage of BPT is badly implemented

  • We have overly detailed BC’s. Our BC’s should be at the level of “CreateDepartureFlight with these parameters”. However, they are at the level of “Goto_Tab”, “Open FlightEditor”, “Edit Flight”, “Save flight”, with several steps in between. This has several consequences:
    • Test cases become unreadable because there is too much detail in them. Since ALM 11.52 doesn’t show the parameters anymore, it is difficult to see what exactly the test case does.
    • Test execution becomes very slow. In QC10 this was even worse; in ALM 11.52 HP introduced the BPTWrapperTest, which speeds up the test execution (or at least the start of the test). Sometimes you might want to disable it.
    • BC code becomes less maintainable, since each BC is implemented in a separate .vbs file. We are now moving to an architecture where multiple BC’s are implemented in one .vbs file and the BC files only call into those files.

All these issues compounded lead to an almost unworkable, unstable test automation ‘solution’.  Time to search for something else…

Test data generation tools

As a tester you’ll often need to generate test data.  A lot can be achieved with Microsoft Excel, OpenOffice/LibreOffice Calc or any other spreadsheet. However, there are also tools that specialize in generating test data.

Several commercial tools focus on generating big data, like those of Grid-Tools, Tricentis and probably several others. Since I haven’t used these, I’m not going to discuss them.

There are also several smaller, free tools that you can use!

  • GenerateData: an open source data generator with lots of possible fields, even country-specific ones. Very useful.
  • Fake Name Generator: generates a single identity for online use.  Especially useful for online social networks. The disadvantage is that it can’t generate a whole CSV file with hundreds or thousands of records.
  • Mockaroo: seems simple, but it also allows regular expressions, so almost everything is possible.
  • Credit card numbers: a list of fake credit card numbers that conform to the MOD10 algorithm.
  • Identity generator: lets you generate various types of files with fake data; specific data fields for the UK, US, Canada and the Netherlands are available.
  • TextMechanic: not so much test data generation as a set of tools to manipulate text online.
  • Random: a list of free and paid online tools to generate data, but also to pick random data (times, dates, passwords, geographical data, …)
  • TypeIt: useful if you need to test the internationalization of your application. Generates/displays all characters in lots of character sets.
  • BabelCode: a slideshow of all possible Unicode characters.  It needs more than 3 hours to display every character it knows!
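The MOD10 (Luhn) check mentioned above is simple enough to sketch in a few lines of Ruby; the numbers below are well-known test numbers, not real cards:

```ruby
# MOD10 (Luhn) check: starting from the rightmost digit, double every
# second digit (subtracting 9 if the result exceeds 9), sum everything,
# and require the total to be divisible by 10.
def luhn_valid?(number)
  digits = number.gsub(/\D/, "").chars.map(&:to_i).reverse
  sum = digits.each_with_index.sum do |d, i|
    if i.odd?
      doubled = d * 2
      doubled > 9 ? doubled - 9 : doubled
    else
      d
    end
  end
  (sum % 10).zero?
end

puts luhn_valid?("4111 1111 1111 1111")  # true  (classic Visa test number)
puts luhn_valid?("4111 1111 1111 1112")  # false
```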

In my opinion these are indispensable tools for any serious tester.  Add them to your bookmarks and start using them.

Book review: Batch Testen

A book review I’ve been meaning to write for some time: Batch Testen by ir. Dew Persad.

Batch Testen bookcover

A few months ago a prospective client in the printing industry asked for an assessment of their testing efforts.  Most of their software runs in batches.  Having worked mostly in the embedded industry and on web-based software, this was quite a new subject for me.  So during my holiday in France I ordered this book and studied it. It proved quite useful for the assessment!  The book is in Dutch.

The summary of the book says it provides a structured method to optimise batches according to the BATCH method (good name!), which has 5 phases, each with 3 steps. I’m not going into detail on the steps; they are very clear in the book.  The first part of the book deals with why good quality in batches is necessary and goes into depth on the BATCH method:

B: Business know-how: in this phase, one should get an understanding of the business: which batch processes are present, which ones are essential, what the risks are, the order of the batches, etc.  This phase is of course essential for the next one.

A: Analyse the batches: in this phase, one looks at the batches in depth, taking into account the necessity and need of each batch process, the data tables used, the acceptance criteria, when to run the processes, etc.  The last step is an in-depth analysis of the batch using the Software Architecture Document with 6 views:

  • Use case view: looking at functionality
  • Test view: way of testing
  • Domain view: looking at the data
  • Logical view: looking at the components of the system
  • Implementation view: looking at the requirements
  • Deployment view: looking at the technical infrastructure

This phase may take quite some time, to get a good understanding of the batch processes.  If done well, it really pays off in the next phases.

T: Test the batches: The reason why I bought the book! There are 3 steps:

  1. Test analysis (the book calls it Testopzet; I don’t know a good English translation)
  2. Test preparation
  3. Test execution

The book gives a nice table with the deliverables of the batch test team.  This is really useful, since these deliverables are what make batch testing different from other testing.  (That’s my opinion!)

C: Check the results: in this phase one has to verify the output of the batches.  The business should accept (or reject) the batches in this phase.

H: Handover to business: batches that are not managed well can suffer performance degradation or even lock up after a few months.  Erroneous input data can multiply over time.  So it is essential to manage the batches, and this requires both the business knowledge and the technical knowledge to maintain them.

At the end of the first part of the book there is a chapter on running the batches in the production environment.  This is something many developers are not familiar with, so it is an interesting read.

The second part of the book goes into detail on

  • Quality of batch processes
  • Development of batch processes
  • Testing
  • Technical infrastructure
  • Extra necessities for the batch roles

I was of course very interested in the testing chapter. Well, it turns out testing batch processes is not so different from testing any other type of software: the same techniques, processes, etc. apply.  The book uses the V-model as the basis for testing batch processes.  It doesn’t go into detail on the applicability of, for example, Agile testing.  In my opinion there is only a small chance of an organisation developing batch processes using an Agile approach.

Conclusion:  if you are using batch processes in your organisation, I think this book is a real addition.  It is quite hands-on, not too theoretical.  It gives quite a few tips and tricks, and the BATCH method is very useful as a basis for your own development process.  Easy to read and brief (164 pages), it provides the basis and a framework, but you have to tailor it to your own needs.

All in all, this book is a worthwhile addition to the large number of testing books already out there.  I would recommend translating it into English as well, for a wider audience.