Friday, February 28, 2014

Governance - Strategy as a key Constraint on Delivery


I often hear the phrase "we need better governance of project XXX" or "they don't understand the value of good governance" from project managers, members of the PMO (Project Management Office), and the like. While it's easy to assume that the people saying such things are striving for administrative excellence as opposed to technical excellence (thanks to Jim Highsmith for that wording), I do believe there is a role for good governance. The real question is: what the hell is governance?

Somewhat unusually for a member of a software development team, I also have experience on the board of a couple of different organisations - ranging from an alcoholic beverages company to a non-profit organisation. This makes my take on governance slightly different to the usual "project is meeting cost, time, and quality constraints".

Imagine, for a second, that you are responsible to the shareholders of an organisation for the performance of the company, and the shareholders (or the market analysts) asked you how the company was performing. If you said "We're on track to meet cost, quality, and time constraints" you'd quite rightly be fired. It's a bad way to govern an organisation - and an equally bad way to govern projects.

Governance, for me anyway, is about defining the minimum set of constraints so that the organisation can deliver value. However, that just raises two different questions: what are constraints, and what is value? Value is easy to define - it's usually money, except in the non-profit sector (better called the "not for loss" sector) where it is some non-financial outcome. Value is generally pretty well defined by the organisation - and if value is not well defined then the organisation's governing body knows where to focus first!

From a constraint perspective, I can think of four types of constraints:

  1. Strategy, 
  2. Risk Mitigation, 
  3. Cost, and 
  4. Timeframe. 

It might seem slightly odd that strategy is a constraint - but it makes sense (to me anyway). Richard Rumelt wrote in "Good Strategy Bad Strategy" that the kernel of a strategy has three things. First, a diagnosis of what is going on in the organisation's environment. Second, a set of guiding policies for the organisation to track a path through the environment. Third, a set of coherent actions the organisation will perform. Those three things provide a set of constraints that govern what the organisation will do without defining how the organisation will do it.

Even Ross, Weill, and Robertson's architecture for governing technology can be considered a set of constraints on how we integrate data and standardise processes as we implement new business systems.

Anyway, once an organisation has clearly defined the constraints on performance then governance becomes a straightforward exercise (but not easy). We just need to keep asking these questions:

  1. Are we getting the right information to determine if the organisation is adhering to the constraints?
  2. Is the organisation adhering to the constraints - and if not, why not?
  3. Are the constraints still valid?

It's also worth repeating Jim Highsmith's words here: we prefer delivering value over meeting constraints. If we are not delivering value then we should not be in business (well, we won't be in business for much longer anyway). If we have to break constraints along the way then that is a worthy conversation with the board (or the project steering committee) as to the validity of the constraints - or the validity of the business model.



Wednesday, February 26, 2014

Non-linear Software Development Workflow

Software development, like many manufacturing processes, is all about work dependencies. We do one thing so that another person can do their job. For example, testers need working software to start their testing. Well, perhaps they need working software to finish their testing.

Here's the interesting thing. In our team (a scrum team developing a payment switch) the testers don't need working software to finish their testing. We have managed to remove the dependency between software development and software testing. Let me explain how.

First, however, waterfall. In 1970 a dude (almost everyone working in CompSci in 1970 was a dude) proposed the waterfall model. In this model, we follow a strict series of steps to produce software. The steps are:

  1. System requirements, 
  2. Software requirements, 
  3. Analysis, 
  4. Program design, 
  5. Coding, 
  6. Testing, and 
  7. Operations. 

(Ignore for the moment that Royce also said that the waterfall model doesn't work for large software projects - almost everyone else ignored him so we can as well :)

In the waterfall model, we have a strict "finish->start" dependency. For example, requirements must finish before design can start. The same problem is present in the iterative methodologies (I also dropped "maintenance" from the flow, as maintenance is just another loop around the cycle).


However, then agile came along and some very clever people (I've heard this idea from both Alistair Cockburn and Mike Cohn) realised that the dependency between each stage is not a "finish->start" dependency but a "finish->finish" dependency. What that means is that testing can start before development finishes, but testing can't finish before development finishes.
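The difference between the two dependency types can be sketched in a few lines of Java. This is a toy scheduling model, not anything from our actual project - the day numbers are made up purely for illustration:

```java
// Toy model contrasting "finish->start" with "finish->finish" dependencies.
public class DependencyTypes {

    // finish->start: the successor (e.g. testing) may not START until the
    // predecessor (e.g. development) has finished.
    static int earliestStart(int predecessorFinish) {
        return predecessorFinish;
    }

    // finish->finish: the successor may start whenever it likes; it simply
    // cannot FINISH before the predecessor does.
    static int earliestFinish(int predecessorFinish, int ownStart, int ownDuration) {
        return Math.max(predecessorFinish, ownStart + ownDuration);
    }

    public static void main(String[] args) {
        // Development finishes on day 10; testing takes 3 days of effort.
        // finish->start: testing cannot even begin until day 10.
        System.out.println(earliestStart(10));        // 10
        // finish->finish: testing can begin on day 0, in parallel with
        // development, but its earliest finish is still day 10.
        System.out.println(earliestFinish(10, 0, 3)); // 10
    }
}
```

The point of the second function is that starting earlier doesn't let testing finish before development does - it just removes the idle waiting at the front.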


This model is really useful in methodologies that have short time-boxed sprints - like Scrum (which, potentially, is not really a methodology but a "Reflective Improvement Framework", but we'll leave Alistair Cockburn with that interesting definition). It's useful because it means that team members can work mostly in parallel during a sprint.

Now, here is the interesting thing. With methodologies that have short time-boxed sprints your team starts getting really good at breaking down features into tiny stories - little pieces of functionality that deliver value to the user or customer. They represent externally visible changes in system behaviour that can be developed and tested (they also might be tokens for work, but that's a discussion for another time). The most important thing is that they are small. Very small. They can be as small as "text field for name displayed on screen" and then the project can have separate stories for data entry, field validation, security, etc. Some people have been known to slice keyboard and mouse navigation for an interface into different stories.

With stories this small it is entirely feasible for the entire team to sit down and very quickly get a testable shared understanding of the story. Then each discipline can go away and start work. The testers can write automated tests for the story and check them into the system. The developers can write code. The BA can resolve any ambiguity and communicate it to both the testers and developers.

This creates what I call a "shared start dependency with a deadline". The shared start dependency is the creation of a testable collective understanding in the team. The deadline is the end of the iteration.


Easy eh! Well, no. In our team - where this happens regularly - there were several things we needed before this behaviour emerged:

  • An automated test suite where testers can specify tests without reference to the user interface. We used Concordion.
  • An automated test suite where testers can check in tests that aren't run by default (to give the developers a chance to build the feature before the continuous build system starts failing the tests).
    • We used the Jenkins CI server. Tests were stored in a Git repository - the testers used SourceTree to check their tests in. 
    • In the Maven build file we told Maven to only run Concordion tests called "indexTest.java" and then referenced active tests from those files using the "c:run" command.
    • One Concordion hint: use Map<String, Object> as the return value for most of your fixture methods. Look at the Concordion documentation on returning a map result - but sadly there's no direct link to the correct section.
  • Maturity in slicing stories smaller and smaller. That took the current team 6 months of development - but we had a very low level of scrum experience when we started.
  • Much collaboration, and a realisation that it was possible for a tester to specify the tests before development started.
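For the Maven side, a Surefire configuration along these lines does the "only run the index fixtures" trick. This is a sketch rather than our actual build file - the pattern assumes your index fixtures follow the indexTest.java naming convention mentioned above:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <!-- Only fixtures named indexTest.java run by default; every other
           test is reached (or deliberately not reached) via the c:run
           links inside the Concordion specifications. -->
      <include>**/indexTest.java</include>
    </includes>
  </configuration>
</plugin>
```

Because the CI server only ever invokes the index fixtures, a tester can check in a new spec without breaking the build - the spec only goes live once someone adds a c:run link to it from an index page.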
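And here's a sketch of the Map-returning fixture hint. The class name, method, and the payment rule are all invented for illustration - in a real project the class would also be annotated with Concordion's JUnit runner (noted in a comment rather than imported, to keep the snippet self-contained):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a Concordion fixture that returns a Map. In a real project the
// class would carry @RunWith(ConcordionRunner.class) so the HTML spec drives
// it; the spec can then read the entries as #result.status, #result.fee, etc.
public class AuthoriseFixture {

    // Hypothetical business rule, invented purely for this example.
    public Map<String, Object> authorise(String amount) {
        double value = Double.parseDouble(amount);
        Map<String, Object> result = new HashMap<>();
        result.put("status", value <= 1000.00 ? "APPROVED" : "DECLINED");
        result.put("fee", value * 0.01);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(new AuthoriseFixture().authorise("500").get("status"));
    }
}
```

The win is that one fixture method can feed several assertions in the spec, so you don't end up writing a separate getter for every field the testers want to check.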

Wow. Thanks for reading this far!