Monday, August 11, 2014

Using Metrics Across the Development Lifecycle

We’ve discussed metrics before, but I think it’s time to revisit the topic.  Different members of the team are going to want to see different sets of metrics at different points in the cycle.  I think it’s critical for organizations to understand what metrics are easily available – or what can become available with a small amount of work.  Then you need to understand how to use those metrics to drive the behavior you want within the organization.  Metrics used the wrong way can create bad behavior.

I’ve known organizations in the past that rated their development team members on the overall number of lines of code they produced.  I’ve tried to explain that this encourages developers to focus on the wrong thing – how fast they can key in chunks of code, with no regard for the number of bugs they are introducing into the application.  It can also encourage the development team to write code that is not optimized.

You may not think this is a big deal.  But I’ve seen real instances where a company had a transaction loop measured in milliseconds, only to have their transaction time crater because some joker hadn’t optimized their code.  Instead of being able to handle 1,000 transactions a second, they found out they could handle fewer than 10 transactions a second.  Yeah, the guy got the code done ahead of schedule, but at the cost of crippling the system’s ability to process transactions.  Don’t worry.  The change never saw the light of day.

I’ve also seen organizations that award bonuses to team members based on how early they deliver functionality – a sliding bonus tied to how early the product reaches production: the earlier you deliver, the bigger the bonus.  This shifts the focus to how fast the system can be assembled versus how well the system matches up to the original requirements or how many bugs find their way through to production.  Just a tad short-sighted if you ask me.

Choosing metrics is a fine line to walk.  You want a set of metrics that allows you to understand what is happening within your lifecycle, but you need to identify and balance those metrics to get the behavior you actually want from the team.  Ultimately, the project needs to produce a product, or an enhancement to an existing product, that is as free of bugs as possible and that delights the customer.  If you do your job right, you’ll deliver more than the customer was expecting (under-promise and over-deliver).  Oh, and you'll also produce a product that drives additional revenue to the organization.

NOTE: The metrics identified below are things that I’ve seen work across the various organizations I’ve been involved with over the years.  These may or may not work for your organization.  It is important to look at the culture of your individual organization and identify what you value, what needs to be measured, and how that information will ultimately be used.  Do I use all of these in the role that I currently play?  No.  Some of them just don’t make sense for the way that we manage our projects, and others won’t work because some of our processes haven’t fully matured.

I've seen organizations use multiple specialized roles to get through the first couple of phases of the life cycle - variously titled Product Managers, Product Owners, Business Analysts, Process Engineers, Product Engineers ... the list could go on and on.  Whatever the title, all of these roles share the responsibility of reflecting the needs of the project sponsor.

Through the scoping/discovery phase of a project, the team should be fully focused on eliciting the needs and expectations of the Sponsor.  This allows you to draw the box around what needs to be delivered and to begin to identify what will be called success.  All that said, how do you measure the success of these individuals - what metrics make sense?

The Project Office may consider the project a success if it is delivered on-time and on-budget.  However, the Sponsor is more concerned about the impact that the project has post-implementation.  They would not have authorized the project to move forward unless they were expecting efficiency improvements, cost reductions or revenue increases.  Ultimately, the project was approved for some underlying reason.  It was not approved to provide busy work to a bunch of people across the organization.

From the Project Office perspective, the key metrics that they will watch - again, remember they are concerned about time, costs and resources - occur during the overall life cycle of the project (a sketch of the change-request calculation follows the list):
  1. What is the % of Project Change Requests against the identified scope that do not alter the overall boundaries of the project – minor misses?
  2. What is the % of Project Change Requests that increase the original scope of the project enlarging the overall boundaries of the project – major functional or non-functional pieces that were missed – major misses?
  3. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
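
The change request percentages in items 1 and 2 are simple to compute once every change request is consistently categorized.  Here is a minimal sketch in Python, assuming a hypothetical list of change request records and a count of identified scope items - the field names are illustrative, not pulled from any particular tool:

    change_requests = [
        {"id": "PCR-001", "impact": "minor"},  # stays within the project boundaries
        {"id": "PCR-002", "impact": "major"},  # enlarges the project boundaries
        {"id": "PCR-003", "impact": "minor"},
    ]
    scope_item_count = 40  # scope items identified during scoping/discovery

    minor = sum(1 for cr in change_requests if cr["impact"] == "minor")
    major = sum(1 for cr in change_requests if cr["impact"] == "major")

    print(f"Minor misses: {minor / scope_item_count:.1%} of identified scope")
    print(f"Major misses: {major / scope_item_count:.1%} of identified scope")

The same calculation carries forward into the requirements phase - just swap the scope item count for the requirement count.
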
From the Project Sponsor perspective, the key metrics that they will watch come after the project is considered complete and has been moved into production:
  1. Did the company achieve the expected efficiency improvements, costs savings or revenue increases?  Does the project meet the objectives identified and agreed to within the business case?
  2. Did the company hit the objectives for the total number of clients adopting the solution?
NOTE: Take a moment to reflect on the different viewpoints – the Project Office may consider the project a success because it was delivered on-time and on-budget.  However, six months to a year later, the Project Sponsor may declare the project a failure because the expected increase in revenue never appeared.  What processes do you have in place to support the different needs of the Project Sponsor vs. the Project Office?  What feedback do you give to your Product Owner?
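
On the Sponsor side, the measurement is a comparison of the business case targets against post-implementation actuals.  Here is a minimal sketch, assuming hypothetical targets and actuals pulled from whatever financial and adoption reporting your organization already produces:

    business_case_targets = {"revenue_increase": 500_000, "clients_adopting": 200}
    actuals_after_one_year = {"revenue_increase": 320_000, "clients_adopting": 240}

    for objective, target in business_case_targets.items():
        actual = actuals_after_one_year[objective]
        status = "met" if actual >= target else "missed"
        print(f"{objective}: target {target}, actual {actual} -> {status}")

The hard part isn't the arithmetic - it's making sure someone is still collecting those actuals six months to a year after the project team has disbanded.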

As the project moves through the formal requirements definition phase, responsibilities shift and additional resources are brought into the team.  Additional artifacts are generated – the business case, requirements, business impacts, security assessments, risks and mitigation plans.  Quite a bit is going on in this phase and this really begins to put the underlying structure in place that will give shape to the product that is being built or enhanced.

The Project Office will have the primary interest in the success of this phase and, once the phase is complete, may want to measure the team's success using the following metrics:
  1. What is the % of Project Change Requests against identified requirements – minor misses?
  2. What is the % of Project Change Requests that introduce new requirements – major misses?
  3. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
There are multiple team members defining requirements – one individual should be accountable for the overall set of requirements, but accountability should also extend to those responsible for contributing the requirements that impact their slice of the organization.  You may want to track metrics down to the individual contributors so that you can assess weaknesses across projects and across the organization.  For example, you may see a pattern emerge across several projects indicating that requirements associated with the sales team are consistently generating change requests – now you have something to act on.
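
A minimal sketch of that kind of roll-up, assuming hypothetical change request records tagged with the project they belong to and the team that contributed the original requirement:

    from collections import Counter

    change_requests = [
        {"project": "Billing Rewrite", "contributing_team": "Sales"},
        {"project": "Portal Upgrade", "contributing_team": "Sales"},
        {"project": "Portal Upgrade", "contributing_team": "Operations"},
        {"project": "Mobile App", "contributing_team": "Sales"},
    ]

    by_team = Counter(cr["contributing_team"] for cr in change_requests)

    # A team that shows up repeatedly across several projects is the pattern to act on.
    for team, count in by_team.most_common():
        projects = {cr["project"] for cr in change_requests
                    if cr["contributing_team"] == team}
        print(f"{team}: {count} change requests across {len(projects)} projects")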

As the project moves downstream we begin to do the formal design work – this is where vision hits reality.  The team gets larger and begins to focus on breaking down requirements or combining requirements to understand what is being requested and translating that into technical terms that define what will actually be delivered.  If the Project Sponsor wants a hard dose of reality – this is usually the place and time for that cold water to be thrown in their face.  Sometimes what is wanted can’t be delivered – or at least can’t be delivered without significant pain.

All that said, the design is laid down, and once activity within the phase is complete the Project Office needs to measure the success of the design using the following metrics (a traceability sketch follows the list):
  1. What is the % of Requirements that can’t be traced through to the Design – major misses?
  2. How many defects are identified in production that can be traced back to Architectural Design?
  3. How many defects are identified in testing that can be traced back to Architectural Design?
  4. How many changes are identified within the architecture artifacts during the following phases of the lifecycle?
  5. How many additions to the architecture artifacts are identified that account for functionality missed during the design phase?
  6. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
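
Metric 1 in that list assumes you maintain some form of traceability matrix.  Here is a minimal sketch of the calculation, assuming a hypothetical mapping from requirement IDs to the design artifacts that cover them:

    traceability = {
        "REQ-001": ["DES-010", "DES-011"],
        "REQ-002": ["DES-012"],
        "REQ-003": [],  # nothing in the design covers this requirement
        "REQ-004": ["DES-014"],
    }

    untraced = [req for req, designs in traceability.items() if not designs]
    pct_untraced = len(untraced) / len(traceability)

    print(f"Requirements not traced to the design: {pct_untraced:.1%} ({', '.join(untraced)})")

The same structure works in the construction phase - map requirements to unit tests instead of design artifacts.
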
Then we move into the construction phase of the life cycle.  Once again, the team bulks up, and this is where we lay down the pieces that come together to satisfy the original vision.  Traceability becomes an important factor across multiple disciplines in this phase.  I like to include regression testing in this phase as my teams are moving to an automated nightly build/test model.  The Project Office may want to look at the following metrics after the phase has finished (a sketch of a simple defect tally follows the list):
  1. What is the % of Requirements that can’t be traced through to the unit tests?
  2. How many defects are found during integration testing?  Are they design or build related?  
  3. How many defects are found during quality assurance testing?  Are they design or build related?
  4. How many defects are found during user acceptance testing?  Are they design or build related?
  5. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
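
The defect questions in items 2 through 4 all reduce to the same tally, provided every defect is tagged with the phase in which it was found and a root cause.  A minimal sketch, with hypothetical records and categories:

    from collections import Counter

    defects = [
        {"found_in": "integration", "root_cause": "build"},
        {"found_in": "integration", "root_cause": "design"},
        {"found_in": "qa", "root_cause": "build"},
        {"found_in": "uat", "root_cause": "design"},
    ]

    tally = Counter((d["found_in"], d["root_cause"]) for d in defects)

    for (phase, cause), count in sorted(tally.items()):
        print(f"{phase} / {cause}: {count}")
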
We're almost there, folks!  We now move into the formal testing phase - formal quality assurance, user acceptance tests, load testing, parallel testing.  This is all the stuff we need to do to make sure the product is ready for prime time.  The Project Office will look at this activity and want to track a few more metrics once this phase has been completed:
  1. How many defects are found in the production environment?  Are they design or build related - or are they related to the implementation?
  2. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
If you review the metrics identified at each stage of the lifecycle, you'll see that most of them assess issues that can impact the final product delivered to the customer.  The one consistent measurement across all phases that the Project Office will want to know about - but that is not necessarily tied to the user experience - is: "Did they miss the target milestone ...".  This is more about keeping accountability within the overall team and ensuring that things keep moving.  And the most important metric won't be known until after the project is in production: are customers adopting the product, and is the company seeing the expected revenue increase or the efficiencies that were planned?

Tags: Project Management; Software Development Lifecycle; Teams; SDLC; Change Management; Portfolio Management; Project Portfolio; Project Metrics; Metrics;


For more information on David L. Collison: LinkedIn Profile
