User story estimation
Recently, I was approached by an Agile tester. He had a dispute with his team about how they should do their estimations. “When we started to assign story points to the user stories, it came to light that not everyone thought the same way about how to estimate these points,” he explained in his email. “Some developers excluded the test work and made an estimation for the development work only, while others included writing the test automation scripts and ignored the manual testing. Can you share your view so I can benchmark my ideas and address the issue with my team?”
The planning and estimation done in Scrum teams is not rocket science. Try to stay away from complex calculations. Rather than providing estimates with multiple decimals that give a false sense of accuracy, create a lightweight process that enables teams to determine what to focus on and to forecast when they will work on and complete certain user stories.
Done
Common practice is to assign story points to each work item to indicate the workload it represents. Some teams use Planning Poker as a technique to make the estimate more accurate and to involve the opinions of the various team members. I think that’s a good practice but not a necessity, as there are other ways to arrive at a balanced estimate. The key is that team members share the assumptions behind their estimates. This way, they get a better understanding of the work that needs to be done and can estimate more accurately.
Combined with empirical information about how much the team completed in preceding iterations, these estimates allow a forecast to be made. Such a forecast helps manage dependencies with other teams, reducing surprises and delays. It also tells us which items the team thinks it can complete in the coming iteration. The keyword here is “complete”, as Agile teams are supposed to deliver a done increment at the end of each iteration. Ideally, each item should be ready for production and therefore be complete, integrated, tested and documented.
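To make the arithmetic behind such a forecast concrete, here is a minimal sketch; the numbers, the three-iteration window and the idea of averaging completed points into a velocity are illustrative assumptions, not a prescribed method:

from math import ceil

# Illustrative only: story points the team completed in recent iterations
completed_last_iterations = [21, 18, 24]
# Illustrative only: story points remaining in the backlog
remaining_backlog_points = 120

# Average throughput ("velocity") over the recent iterations
velocity = sum(completed_last_iterations) / len(completed_last_iterations)
# Rough forecast of how many more iterations the remaining work may take
iterations_needed = ceil(remaining_backlog_points / velocity)

print(f"Average velocity: {velocity:.1f} points per iteration")
print(f"Forecast: roughly {iterations_needed} more iterations for the backlog")

A sketch like this only gives a rough outlook; the point is the lightweight reasoning, not decimal precision.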
An interesting exercise is to check the Definition of Done, if your team has one. The DoD defines when we consider an item to be done. You’d expect it to state what the team considers to be complete (e.g. all requirements fulfilled and both the code and the automated tests checked in), integrated (e.g. merged into the main version of the product), tested (e.g. unit, functional, non-functional and integration tests completed successfully) and what needs to be documented at the technical or functional level. Discussing the DoD helps the team understand its scope and determine which activities should be considered when making an estimate.
Collective responsibility
So, should testing be included in the estimation? If the team is expected to deliver tested solutions, the answer is simply yes. And since the whole team is responsible for delivering the items it commits to, all members should be involved in the estimation. Everyone should agree with the story points assigned.
I encourage all team members to participate in giving one overall estimate for each item. Of course, developers will be more confident about estimating development work, just as testers will be about estimating test work, but splitting estimates across disciplines makes things overly complex. Remember, it isn’t rocket science. Such a split also undermines the team’s collective responsibility for all activities in the Definition of Done. Last but not least, using these different insights as the starting point for a dialogue leads to a better understanding of each other’s work and a more accurate estimate.