Month: October 2009

Planning Poker

Planning Poker is a fast, card-based approach to team estimation. User stories are presented to the team and discussed as a group to round out understanding. In Scrum, this can be played during the Sprint Planning Meeting.


  1. In the planning session each participant (player) gets a set of cards with the values 0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100, infinity and ?.
  2. One story is estimated at a time.
  3. Each story is discussed briefly.
  4. At the end of the discussion each player selects a card but doesn’t reveal their choice yet.
  5. The cards are all revealed at the same time.
  6. The players with the highest and lowest estimates are given a soapbox to justify their estimates, and then discussion continues. It is often these discussions that reveal more aspects to be considered.
  7. Repeat the estimation process until a consensus is reached.
  8. If the agreed estimate is greater than 20 points then the story should be split up and its parts estimated separately.
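The reveal-and-repeat loop in the steps above can be sketched as a toy simulation (plain Python, names are mine; the real game is a conversation, not code):

```python
CARDS = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]  # plus "infinity" and "?"

def play_round(picks):
    """One estimation round: all cards are revealed simultaneously.

    Returns (consensus or None, lowest pick, highest pick); the outlying
    players are the ones who should justify their estimates before re-voting.
    """
    lo, hi = min(picks), max(picks)
    return (lo if lo == hi else None), lo, hi

consensus, lo, hi = play_round([3, 5, 8])   # disagreement: 3 and 8 explain themselves
assert consensus is None
consensus, lo, hi = play_round([5, 5, 5])   # re-vote after discussion: consensus
assert consensus == 5 and consensus <= 20   # under 20 points, so no need to split
```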

Additional Comments

Everyone who would work on delivering the story is involved in the estimation.

The cards are numbered as they are to account for the fact that the larger an estimate is, the more uncertainty it contains. Thus, if a developer wants to play a 6 they are forced to reconsider: either conclude that some of the perceived uncertainty does not exist and play a 5, or accept a conservative estimate that accounts for the uncertainty and play an 8.
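The "no 6" rule falls out of the card values directly: a gut-feel number that lies between two cards forces a choice between the optimistic and the conservative neighbour. A throwaway helper to illustrate:

```python
CARDS = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def neighbouring_cards(gut_feel):
    """A gut-feel of 6 is not playable: return the cards either side of it,
    forcing the choice between optimism (5) and allowing for uncertainty (8)."""
    lower = max((c for c in CARDS if c <= gut_feel), default=CARDS[0])
    upper = min((c for c in CARDS if c >= gut_feel), default=CARDS[-1])
    return lower, upper

assert neighbouring_cards(6) == (5, 8)     # must pick 5 or 8, never 6
assert neighbouring_cards(13) == (13, 13)  # an exact card value stands as-is
```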

Avoid anchoring, where a stated opinion about the amount of time a task will take can skew the rest of the team’s estimates.

It is important to remember the fidelity of most estimation input data is poor, i.e. we are usually dealing with approximations and best-guesses for work effort. Software development is difficult to predict and we get diminishing returns beyond a certain point of investing more effort in the estimation process.


There’s a good description of a Planning Poker session by Crisp.
There are free templates for the cards, for example, first page, second page.
The cards can be purchased from various sources but the cheapest are the originals from Mountain Goat Software, Mike Cohn’s company.
For distributed teams there’s a Planning Poker web-based application.


IronPython

I’ve bought “IronPython in Action” as I think IronPython may provide a useful scripting environment for our applications.

To allow scripting, the client application must expose an API. As we’re coding, we should consider what we want to expose.

The application currently lacks an authorisation mechanism, but that is on the roadmap.

Microsoft has a Framework Designer toolset that would be useful for analysing what we expose.

We could also create extension points for addins.
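As a sketch of the kind of extension point we might expose (the names and shape here are entirely hypothetical, and in plain Python for illustration; the real surface would be on the .NET side and driven from IronPython scripts):

```python
class CommandRegistry:
    """A minimal extension point: addin scripts register named commands
    which the host application can invoke later (e.g. from a menu)."""

    def __init__(self):
        self._commands = {}

    def register(self, name, func):
        self._commands[name] = func

    def invoke(self, name, *args):
        return self._commands[name](*args)

# What a user's addin script might then look like:
registry = CommandRegistry()
registry.register("greet", lambda who: "Hello, " + who)
assert registry.invoke("greet", "team") == "Hello, team"
```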

Code Review

Smart Bear Software has a code review tool that they used in a large case study with Cisco, from which they drew up a set of best practices. A summary of these is useful for our own code reviews.

  1. Review fewer than 200-400 lines of code (LOC) at a time. Beyond this the ability to find defects diminishes.
  2. Take your time with code review. Faster is not better. Keep it below 300-500 LOC per hour.
  3. You should never review code for more than 90 minutes at a stretch (although you should always spend at least five minutes reviewing code – even if it’s just one line).
  4. Author preparation eliminates the majority of defects, so try to prepare notes and comments outside of the code for the review.
  5. Both author and reviewer should use a checklist as this helps to find omissions. Personal checklists are also useful.
  6. Verify that defects are actually fixed.
  7. Defects are positive. This is an opportunity to improve the code; for the author and reviewer to work as a team; for developers to unlearn bad habits; and for good mentoring. Defects must not be held against a developer in any way.
  8. Hubris matters. Reviewing a fifth to a third of your code will probably give you the maximum benefit with minimal time expenditure, and reviewing 20% of your code is certainly better than none.
  9. The most effective reviews are conducted using a collaborative software tool that facilitates the review. We use Review Board for our peer review of code.
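The LOC limits in practices 1 and 2 are easy to enforce mechanically. A rough sketch, assuming we can get per-file changed-line counts from somewhere (the function and its inputs are my own invention):

```python
def batch_for_review(file_locs, limit=400):
    """Greedily group changed files into review batches of at most `limit`
    lines of code, per practice 1 (review fewer than 200-400 LOC at a time)."""
    batches, current, size = [], [], 0
    for name, loc in file_locs:
        if current and size + loc > limit:
            batches.append(current)      # this batch is full; start another
            current, size = [], 0
        current.append(name)
        size += loc
    if current:
        batches.append(current)
    return batches

# Five changed files, 700 LOC in total -> at least two review sessions.
batches = batch_for_review(
    [("a.cs", 250), ("b.cs", 120), ("c.cs", 180), ("d.cs", 90), ("e.cs", 60)])
assert batches == [["a.cs", "b.cs"], ["c.cs", "d.cs", "e.cs"]]
```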


Gendarme is a code analysis tool that I think is superior to FxCop (although that doesn’t preclude us from using FxCop, too).

I downloaded the binaries-only package, ran the wizard against my assemblies, and was impressed by the results. I think this is something I’d like to add as a report to our nightly builds in TeamCity.

Note that it recommends you select the optimise flag in your builds, as otherwise the cruft in the IL can trigger false positives.
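For reference, that flag is the standard MSBuild Optimize property, set per configuration in the project file (usually already true for Release builds):

```xml
<PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
  <Optimize>true</Optimize>
</PropertyGroup>
```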