Thanks to everyone who attended our webinar, 10 Things You Didn't Know You Didn't Know About BDD. We hope that we busted some common myths and provided insights on how to make BDD adoption on teams successful. We had a great discussion and still had questions left over when time ran out. As promised, here is our follow-up. If you would like to watch the replay, check it out here.
Who writes the requirements in a BDD approach? Is it a business analyst, a product manager, or someone else?
I have written a bit about this workflow here.
The goal is for the Gherkin feature files (which is where you express your requirements) to be a reflection of two things:
- Your team’s shared understanding of the requirements (the desired behaviour)
- What the code actually does
To achieve this, you can’t have any one person work on them in isolation, at least not for long. You need everyone’s input.
Ideally, you would all sit and write them together, in real-time, until you’re all agreed on what they say, but in a lot of contexts that isn't possible.
As BDD practitioners, we try to accept the fact that our ignorance about what we’re building is one of our biggest barriers to progress and make deliberate efforts to uncover and confront that ignorance. With that view, it makes sense for those with the most to learn to try to write at least the first draft of a Gherkin document.
Otherwise, you are just throwing documentation “over the wall” like we did in the bad old days.
Are you saying that automation for stories in development needs to be done at the same time?
BDD stands for Behaviour Driven Development. That means we drive the development of the code based on automated examples that express the behaviour we want. It’s another way to describe the practice of Test-Driven Development (TDD).
We start with an automated example that fails, because we haven’t touched the application code yet.
This becomes our guiderail for the solution: the black-and-white outlines in a kids’ colouring-in book. Now we work on changing the application code until the automated example passes. At that point, we know we have two things:
- A system that exhibits the behaviour described in the example
- A test that will fail if the behaviour ever regresses
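The cycle above can be sketched in a few lines of Python. This is a minimal illustration, not code from the webinar; the function and test names are invented.

```python
# A minimal red-green sketch of test-first development (all names invented).

# Step 1 (red): write the example first. Before apply_discount exists,
# running this test fails, which proves the test *can* fail.
def test_gold_members_get_ten_percent_off():
    assert apply_discount(100.00, "gold") == 90.00

# Step 2 (green): write just enough application code to make the example pass.
def apply_discount(price, member_tier):
    if member_tier == "gold":
        return round(price * 0.9, 2)
    return price

test_gold_members_get_ten_percent_off()  # now passes
```

The same test that drove the implementation now guards against regression: if `apply_discount` ever stops honouring the rule, the example fails again.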
What is a good strategy for introducing BDD on a partially developed project?
This is most people’s context, so it’s important to talk about. We wrote about this in the chapter “Working with Legacy Applications” in The Cucumber Book / The Cucumber for Java Book, and Michael Feathers does an amazing job of describing ways of achieving testability in legacy codebases in his book, Working Effectively with Legacy Code.
In summary though, I would probably start with defects. They tend to come pre-packaged as an example and are nice and self-contained in terms of the desired behaviour. Also, defects tend to pop up around the places in the code where we make mistakes, either because we’re working there a lot, or because the code is hard to understand, or both. So, it makes sense to hit these areas of your codebase first.
As you start to want to use BDD for new behaviour, I would use a practice Richard Lawrence taught me about, called the Slow Lane.
With the Slow Lane pattern, instead of trying to adopt BDD across all your stories at once, you recognise that there’s a learning curve, so you start by just picking one, simple, story for your first experiment. But you commit that you will do this story by the book. You will run discovery workshops until you’re agreed on the rules and examples that scope it. You will collaborate to formulate your Gherkin scenarios to describe that behaviour, iterating and refining them until they reflect your shared understanding. Then, you’ll do the work to automate those scenarios against your system, keeping disciplined to leave the implementation of the application code until you have seen the scenario fail for the first time.
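A Slow Lane story’s formulation step might end with a scenario along these lines. The domain and wording here are invented for illustration, not taken from the webinar:

```gherkin
# A hypothetical outcome of a discovery workshop, agreed by the whole team.
Feature: Account withdrawal

  Scenario: Successful withdrawal within the balance
    Given Alice has a balance of £100
    When Alice withdraws £40
    Then her balance should be £60
```

The point of the exercise is less the file itself than the shared understanding the team builds while refining it together.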
The first one or two stories will take much longer than normal, because you’ll be learning this new process, and figuring out how to test your application in this new way. Gradually though, you’ll get the hang of it, and this lane will get faster. You’ll want to pull more stories into this lane, until eventually this is how you do all your work, because you’ve realized how awesome it is.
My team doesn't have requirements. Writing tests while development is going on can be difficult; our design keeps evolving during testing.
Sometimes, the only way to learn what your users want is to give them software to try, but it is by far the most expensive way. I don’t know enough about your context to know whether writing Gherkin together (which is definitely a much quicker way to figure out what you want the software to do than actually writing it) is the right tool for the job, but waiting until you’ve already written code before you decide what it should do sounds like a recipe for disaster to me.
I think writing the automation first, while the code is being written and there are few requirements, leads to throwaway work.
On the contrary, when we make an effort to invest in automation before we write any code, we are forced to see gaps in our requirements straight away. How are we going to write an automated test when we don’t know what it should prove?
BDD and TDD practitioners tend to ask “why” a lot more often and challenge their product owners to be able to explain their thinking in detail.
It is very important that we have a culture in our teams and organisations that values people asking these kinds of questions. We can’t build software if we don’t know what it’s supposed to do.
It is preferred to start with regression testing on code that is not volatile. Do you agree?
Actually, I disagree. If the code isn’t changing much, you aren’t going to get much bang for your buck by protecting it with automated tests. We put seatbelts on people, not on luggage, because they’re the things we don’t want to slip loose. Similarly, we want automated tests around the code that’s most likely to go wrong.
What types of automation do you mean for the above?
Point #8 was “Automated tests are part of how we write the code; they’re not ‘someone else’s job’.”
Here I am referring to two types of tests:
1) automated acceptance tests, which help us to build the right thing
2) automated programmer tests (also known as unit tests or microtests) which help us to build the thing right.
Automated acceptance tests are written in a way that we can show them to non-technical stakeholders (perhaps a business analyst on the team, or a product owner, or a non-technical tester) and check that the test is doing the right thing. These are the tests that help us to build the right thing. These tests will tend to bite off quite big chunks of the application, covering many layers in integration at the same time. They should also not try to be exhaustive, as this is a waste of time. That’s the job of the microtests.
Microtests, or programmer tests, or unit tests, are written in code, and test a specific small unit of the code such as an individual class or method. These are not designed to be read by non-technical folks, but they will definitely be implementing behaviour that those folks care about. That’s why we need to develop our shared understanding. These tests help us to build the thing right.
Both of these types of tests give us feedback. Acceptance tests tell us whether the system is doing what it’s supposed to, and if not, the microtests will tell us where we need to go to fix it.
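The division of labour between the two kinds of test can be sketched in Python. This is a hypothetical illustration; the `normalise_postcode` unit and its checks are invented names, not from the webinar.

```python
# Sketch contrasting microtests with acceptance tests (all names invented).

def normalise_postcode(raw):
    """A small unit: the kind of code microtests exercise exhaustively."""
    return raw.strip().upper().replace(" ", "")

# Microtests: many fast, exhaustive checks on this one unit.
assert normalise_postcode("  sw1a 1aa ") == "SW1A1AA"
assert normalise_postcode("ec1v9ee") == "EC1V9EE"
assert normalise_postcode("") == ""

# An acceptance test, by contrast, would drive a larger slice of the
# application ("When a customer enters their postcode, Then they see
# their nearest store"), stay readable to non-technical stakeholders,
# and leave the formatting edge cases to the microtests above.
```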
When we’re writing code, we want that feedback continuously. We don’t want to wait days, not even hours, to find out that we’ve made a mistake. We want to run an up-to-date set of tests right now and find out whether it’s working or not.
When we write the tests first, we get guiderails that help us to rapidly iterate on our solution, and to refactor safely to ensure the code will be easy to change in the future.
Automated tests, for BDD practitioners, are like scaffolding for builders. Sure, you could probably put the building together by just standing on a ladder, but it would be risky and slow.
Does that mean that the QA department is n/a?
Yes and no.
I don’t like to see people driving a dividing line between “people who care about quality” and “people who make stuff”. After all, we want everyone to care about quality, right?
I also don’t like to see us working on the assumption that the “people who make stuff” will just make a bunch of mistakes, and there’s nothing we can do about that. This is the mode of working which Demming mocking referred to as “let’s make toast – I'll burn it, you scrape it”. It’s a stupid way to work.
That said, every team needs a balance of skills, perspectives and interests. Most developers (and I would include myself in this category) like to think about the happy path, about how to get something working. Thinking about edge cases is a different headspace to be in, and the happy path naturally crowds it out. That’s not to say developers can’t do this, but they often need coaching and support to get better at it. So, we still need QA-minded people, to help us decide what tests we need to write.
Sure, some of those folks might want to get into coding and join in with that work, great, but they don’t need to. There’s still plenty to do.
So my advice would be to disband the QA department, but have all the QA people embed themselves in teams and help to keep the feedback loop short so nobody burns the toast in the first place.
Can you provide any guidelines / best practices for structuring a BDD features repo? Imagine you have a git repo that, based on configuration, produces several “products” from the customer’s point of view. Is it better to keep everything in a single repo or split the code by product?
Ultimately, you want to be able to use the feature files to get feedback from someone who understands the customers’ needs about whether you’re building the right thing. So they need to be structured in a way that makes sense to them.
The features should be a reflection of the code, and of your shared understanding of the problem domain. If there’s a mismatch between those two things you will experience some tension. It’s helpful to talk about this as a team, and I would recommend learning about Domain-Driven Design here, to help align your system’s bounded contexts with the problem domains you are working in.
How do you push the rest of the company to have “conversations” (aka discovery) when you are the only person doing automation?
This can be tough.
My advice when trying to introduce any kind of a change is:
- Make sure you’re proposing it as a solution to a problem that everyone recognises and agrees needs to be tackled
- Don’t make a big deal out of it by bringing in lots of jargon
- Just propose it as an experiment
So, try to spot problems where that conversation might have saved your team time, and highlight this. If there’s a pattern, keep some notes and then let people know what you’ve observed. Then suggest “we could try having a dev, QA and business analyst sit down for half an hour before we work on the code, just to check we’re all on the same page.” Softly.
"Great webinar until now 😊 You mentioned that at the end we have like 100% knowledge about a given feature; well, I would kindly counterargue that you never have 100% knowledge about your software. BDD of course improves your knowledge, as you said, from the start. But 100% knowledge is a myth, wouldn’t you agree? :)
In your experience, can these conversations (discovery) and formulation occur naturally and continuously? Some of us may think they happen only at the project start.
Yes, and I can’t emphasize this enough: we work in small, rapid iterations. Teams who do this typically have two or three short example mapping sessions per week.
Can you also share some best practices for writing Gherkin? For example, how much detail should scenarios go into?
This is a big topic. I recommend Seb’s forthcoming book, Formulation.
I have TestComplete. Do I need Cucumber as well? I see them as redundant.
It depends. TestComplete basically has an implementation of Cucumber inside it, so if that’s the tool you’re already used to, you can totally use that for BDD.
Cucumber has the advantage that you can run the tests from within the developers’ IDE, so it’s perhaps more conducive to the workflow of rapid iteration between tests and application code.
I use TestComplete and the Jira add-on Zephyr. How can BDD fit into these tools?
"With BDD you don't need to ""manage"" tests in quite the same way. But Test management platforms can add analytics capability on top of what you achieve with BDD automated acceptance tests."
Do you see BDD as more applicable at an integration level than at a component level (unit testing)?
Yes, it’s generally necessary to integrate a few layers of your app to have it exhibit behaviour that’s interesting enough to describe in an acceptance test. That said, you should adhere to the principle of always testing at the lowest level possible. So, if the business rule you want to validate is implemented in a single object, try to have your acceptance tests pick up just that one object and exercise it, rather than wrapping it up with the UI, the database and so on. That just makes your tests slower and clumsier.
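Testing at the lowest level possible can be sketched like this. The domain object and rule below are invented for illustration; the point is that the acceptance check talks to the object directly, with no UI or database in between.

```python
# Sketch of "test at the lowest level possible" (all names invented).
# The business rule lives in one object, so an acceptance test can
# exercise that object directly, skipping the UI and the database.

class RefundPolicy:
    """Domain object implementing a single business rule."""
    RETURN_WINDOW_DAYS = 30

    def is_refundable(self, days_since_purchase):
        return days_since_purchase <= self.RETURN_WINDOW_DAYS

# A step definition (or plain test) picks the object up directly:
policy = RefundPolicy()
assert policy.is_refundable(14) is True    # inside the return window
assert policy.is_refundable(45) is False   # outside the return window
```

Driving the same rule through a browser and a real database would prove nothing extra about the rule itself, while making the test slower and more brittle.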
How deep should a BA/PO go into details when writing user story acceptance criteria, if the QA/dev is in charge of the BDD scenarios?
See answer to #1
If QA is writing the Given/When/Then, in which format is the BA writing their acceptance criteria? Any recommendations? Today they create user stories in Jira. We were planning to have our BA write informative Gherkin and then have our QA write the descriptive Gherkin.
Try to remember that all of the writing down of anything, except for working code, is almost certainly waste. A steppingstone. A means to an end. A necessary evil.
So, try to minimize how much you write anything down. Try to have conversations, and then capture what you’ve agreed in those conversations into Gherkin documents that are immediately checked into your version control system, so that the team can immediately start implementing them.