Wednesday, May 15, 2013

Metrics - How I Use Them Today

Metrics are a funny thing - what works for one team may not work for another.  In practice, I have always found that I need to adjust the metrics I track based on the organization.  I find it humorous when some vendor walks in the door and tells me that if I just revamp my process to use their custom SDLC solution and manage by their key indicators, all of my problems will be solved.  Dream on!  If I’ve learned one thing, it is that I must be sensitive to the unique culture of the organization and create a process and metrics that work within it.

So, that said, I’m going to walk through some of the metrics and processes that my teams currently utilize and how these get used to strengthen the overall lifecycle.  You may find some of this useful, or you may decide that the whole thing is nonsense.  That’s up to you - what I can say is it works for the teams that I manage and is trusted by the management team to give insight into our overall portfolio.  This is still somewhat high level, but it will give you the general flavor of the things that I consider important.

One of the first things that I did walking in the door was to have the entire development team begin tracking how much time was being spent on which activities.  There were multiple reasons for initiating this type of tracking:
  1. It allowed each individual developer and their manager to identify where they were spending their time.  It also initiated conversations on why people were spending time on certain activities.  Over a period of several months, we collectively managed to increase time spent on corporate approved projects from 35% of the overall time to 65% of the overall time.  We continue to look at this metric to ensure that people are spending time on approved project activity.
  2. It allows me to understand how many hours people are working and to look at when I need to increase our headcount.  Over the last six years, I have used this data, along with other metrics information, to initiate the discussion on headcount increases and win those discussions.
Time tracking allows me to identify several metric points that provide insight into our overall lifecycle, right down to the management of individual resources:
  1. Overall time spent on a given project across all resources.  How did it compare to our baseline estimates?
  2. Time spent on activities within the project vs the baseline projections.
  3. Are our estimates accurate?
  4. Did we identify development activity that was missed during the design stages?
  5. Individual time analysis - approved project activity vs production maintenance vs defect management vs administrative overhead.
  6. Team analysis - approved project activity vs production maintenance vs defect management vs administrative overhead.
This has proven so successful within my team that the entire IT Department now tracks their time, and all individuals within the Business Teams track their time against project activity.
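As a sketch of the kind of analysis this enables, the individual and team splits above can be computed directly from raw time entries.  This is a minimal illustration in Python - the people, category names and hours are hypothetical, not data from our actual tracking system:

```python
from collections import defaultdict

# Hypothetical time entries: (person, category, hours). The categories
# mirror the buckets discussed above; the numbers are illustrative.
entries = [
    ("alice", "approved_project", 30), ("alice", "production_maintenance", 6),
    ("alice", "defect_management", 2),  ("alice", "admin_overhead", 2),
    ("bob",   "approved_project", 22), ("bob",   "production_maintenance", 10),
    ("bob",   "defect_management", 4),  ("bob",   "admin_overhead", 4),
]

def category_split(entries):
    """Return each category's share of total logged hours, as a percentage."""
    totals = defaultdict(float)
    for _person, category, hours in entries:
        totals[category] += hours
    grand_total = sum(totals.values())
    return {c: round(100 * h / grand_total, 1) for c, h in totals.items()}

split = category_split(entries)
print(split["approved_project"])  # → 65.0
```

The same function applied to one person's entries gives the individual analysis; applied to a team's entries, the team analysis.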

While time tracking helps at an individual project level and at a resource level, it is retrospective - it does not allow you to project out over a period of time.  For that you need a view into your entire portfolio.  To that end, we have a Portfolio Plan that is reviewed on a weekly basis by myself and my peers across the organization.  We do this to identify bottlenecks in the process, find ways to break down barriers and push the projects forward.  Here are the key metrics that we review:
  1. Each project within our Portfolio Plan is broken into 6 distinct phases - Discovery, Requirements, Technical Design, Development, Testing and Launch.
  2. Each phase has two key dates - Start and Finish.
  3. Each phase has resource allocations assigned for each team.
  4. At the end of each phase we adjust our timelines and allocations for the following phases based on what we’ve learned to date within the project.
  5. Each project in the Portfolio Plan is tied to the associated “real” project housed in our MS-Project Server.
  6. Each “real” project has standard milestones that identify the initiation of each phase and the completion of each phase.
  7. Each day - routines are run that allow us to provide analysis of the Portfolio Plan vs what is happening within the “real” projects.
  8. Periodically, I go in to relevel the entire Portfolio Plan to ensure that we are not over-allocating any one team and that we have enough work moving through the pipeline.
Why do we do this?  Simple: it helps us control the activity in the organization today, allows us to review the allocations of the various teams, and ensures that we are levelling project activity over a period of time.  Most importantly, it forces us to prioritize what is moving through the pipeline.
  1. I can tell when activity within any phase is going to be completed ahead of schedule, on schedule or is behind schedule.
  2. I can use the above information to tell me when I need to reallocate resources so that I can hit internal target dates for critical projects.
  3. I can use this information during budgeting to review with Senior Management and discuss our pipeline to see if they are ok with the volume of activity we are managing or if we need to increase our pipeline.
  4. I can use this information to hold individuals and teams accountable.
  5. I can project out when we will be trying to force through too much activity within the overall pipeline and use that information to initiate conversations about the priority of project activity.
  6. I can use this information in conjunction with the time tracking data to identify when additional resources might be useful.
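The ahead/on/behind classification above boils down to comparing each phase's baseline finish date against its actual (or forecast) finish.  A minimal sketch - the project, phase dates and records here are purely illustrative, not our Portfolio Plan data:

```python
from datetime import date

# Hypothetical phase records for one project:
# (phase, planned_finish, actual_or_forecast_finish).
phases = [
    ("Discovery",        date(2013, 1, 15), date(2013, 1, 15)),
    ("Requirements",     date(2013, 2, 28), date(2013, 3, 7)),
    ("Technical Design", date(2013, 3, 31), date(2013, 4, 4)),
]

def schedule_status(planned, actual):
    """Classify a phase as ahead of schedule, on schedule, or behind."""
    delta = (actual - planned).days
    if delta < 0:
        return f"ahead by {-delta} days"
    if delta == 0:
        return "on schedule"
    return f"behind by {delta} days"

for name, planned, actual in phases:
    print(f"{name}: {schedule_status(planned, actual)}")
```

Run daily across every project in the plan, the same comparison flags the phases where timelines and downstream allocations need adjusting.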
So, removing ourselves from the project level and going down to where the rubber meets the road, let’s look at some of the metrics that we are tracking within the development team as it relates to coding:
  1. We have begun implementing nightly builds/nightly regression tests across all of the development teams.  Some teams are further along in the process than others, but all are moving towards the same overall goals.
  2. Jenkins is used as the tool that automates all builds for all packages across all environments.
  3. Jenkins is also used to gather results from the build process and the regression test process.
  4. Jenkins is used to distribute emails to team members and management members identifying the results of the build/regression test process.  Developers are then responsible for going in and fixing the issues that have been identified.
  5. Sonar is a tool that allows us to view the results of the build/test processes in a web application - broken out by each application.
    1. Line Coverage - how much of our code has been validated with test cases
  2. Unit Test Success - how many of our tests completed successfully and how many failed.
    3. Violation of Rules Compliance - how many instances there are where code is not conforming to coding rules - both those identified as industry best practices and those that we have defined internally
    4. Severity of the violations.
    5. Code Reviews - I can tell if the code has gone through our review process.
    6. Other metrics are provided - but I don’t consider them as useful as those identified above.
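The two headline numbers - line coverage and unit-test success - are simple ratios over counts that the build/test process already produces.  A minimal sketch with illustrative inputs (these are not real Sonar figures):

```python
def coverage_summary(lines_to_cover, uncovered_lines, tests_run, tests_failed):
    """Compute line coverage % and unit-test success % for one application.

    Formulas follow the common definitions of these metrics; the counts
    passed in below are purely illustrative.
    """
    line_coverage = 100 * (lines_to_cover - uncovered_lines) / lines_to_cover
    test_success = 100 * (tests_run - tests_failed) / tests_run
    return round(line_coverage, 1), round(test_success, 1)

cov, success = coverage_summary(lines_to_cover=12000, uncovered_lines=4200,
                                tests_run=850, tests_failed=17)
print(cov, success)  # → 65.0 98.0
```

Tracking these two numbers per application, build over build, is what makes the trend lines meaningful - one snapshot tells you far less than the direction of movement.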
All of the above gives me insight into the overall confidence of the package before we move to production.  I can review this anytime I want and see our progress in knocking down the number of bugs being found by our standard unit and regression tests.  I can see if we are improving the overall amount of code that is covered by our tests.  I can see if we are reducing the overall number of violations that we are finding within any given application.

Moving into the formal test cycle, I then begin to track a different set of metrics:
  1. How many defects were found by QA that were not identified by the Development Team through unit, integration or regression testing?
  2. How many new package builds occur before QA has a clean package ready for production?
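The first of those metrics is essentially a defect escape rate - the share of defects that slipped past the Development Team's own testing and were first caught by QA.  A minimal sketch, assuming we simply count defects by who found them first (the counts are illustrative):

```python
def qa_escape_rate(dev_found, qa_found):
    """Percentage of all defects first found by QA rather than Development."""
    total = dev_found + qa_found
    return round(100 * qa_found / total, 1) if total else 0.0

# Illustrative counts for one release cycle.
print(qa_escape_rate(dev_found=45, qa_found=5))  # → 10.0
```

A falling escape rate over successive releases is a good sign that the nightly build/regression discipline is actually catching problems earlier.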
Each week we notify the entire management team of changes impacting the 6 phases of our project lifecycle.  Those projects where dates have shifted for any given phase are identified along with explanations - a miss by the team, extended due to resource constraints, extended because we missed something, reduced because it was easier than we anticipated.  Whatever the reason, it is noted and everyone has visibility.

On a bi-monthly basis we meet with Senior Management to discuss the overall portfolio: the slides in timelines - positive and negative - that we are seeing across all projects; projects impacted by levelling activities; the priority of projects, so that we understand where to focus time when the pipeline is packed full; and shifts in strategy that we need to accommodate within the overall pipeline.

At the end of our projects, we provide summary information to our Senior Management Team that compares the total number of hours spent within the various teams vs the baseline estimates.  We also report on total dollar expenses vs the baseline estimates.

Again, this is somewhat high level, but it gives you a flavor for some of the things that I consider useful as my teams move projects through the lifecycle.  Most of this was accomplished using tools we already had in place - but that we weren’t taking full advantage of:
  • MS-Project
  • MS-Project Server
    • PWA
    • Reporting Analytics
  • Jenkins (Build / Test Automation)
  • Sonar (Consolidated Build Reporting)
Yes, there are other metrics that we track and discuss.  This has all been done with an eye on improving individual and team productivity, reducing the number of errors moving through development and into QA (ultimately into production), holding individuals and teams accountable and ensuring that we are prioritizing activity within the organization to align with the overall strategy communicated from Senior Management.  I’m focused on those metrics that can provide significant results for the organization.  I’m not interested in adding metrics to produce pretty charts.

None of these metrics is foolproof, and none should be taken purely at face value.  Sometimes the metrics lie, and that’s where it is our job as individuals to interpret and validate the results.

Tags: SDLC, Software Development, Metrics, Lifecycle, Software, Application Development, Project Office
If you'd like more information on my background: LinkedIn Profile
