
Wednesday, December 2, 2015

Agile? Waterfall? - Pick Carefully or Pay the Price!

Managing projects across an enterprise is an art as well as a science. There are several well-known paradigms used by organizations across the world - from waterfall to agile, with variations in between. I have always told those who have asked that there is no single silver bullet that magically solves the project management puzzle. In fact, my recommendation has always been to create a project management paradigm that fits the culture and needs of the organization.

My teams work in an industry that is heavily regulated by state and federal agencies and must also conform to several industry governing entities that set standards within the payments space. Many of the projects that we work on involve implementing standard interfaces between the various payment networks, acquirers, issuers and other players within the financial industry. Implementing these interfaces is a technical exercise with little to no user interface. You either get the spec right or the money doesn't move. These initiatives really are not set up to be run using agile techniques.

Now, let me shift over to some of the other initiatives we run within the organization: our mobile and internet applications. As you can probably guess, these applications have significant user interfaces. These project efforts can utilize and benefit from agile techniques that give our 'user representatives' much quicker access to potential solutions as well as drive increased cooperation between the development, user and quality assurance teams. This is a place where our teams are experimenting with agile techniques as a way to accelerate delivery through the development channel.

One organization utilizing different project management paradigms based on the unique needs of the initiative!

I'm running at this from two different directions at the same time:
  1. I'm working with our project management organization to review our overall defined lifecycle to recommend changes that will allow projects that might benefit from being run utilizing agile techniques to take advantage of a slimmed down process.  We still need to figure out how to satisfy the documentation requirements of the various state/federal agencies and industry governing bodies, but overall, they are supportive of our desire to create multiple lanes that can be managed differently based on the overall risk of the effort.
  2. I'm encouraging our project managers and team members to challenge the process. When working any individual initiative within the overall portfolio, they should make recommendations to skip various project artifacts, project steps or entire slices of the overall project lifecycle. The key is to document why the decision was made to skip certain pieces of the lifecycle, based on the risk and impacts to the overall effort.
This is what works in the enterprise that I currently work for. 

On a regular basis, I have to sit across the table from auditors and prove to them that all of the appropriate documentation has been generated and that there is traceability through the project effort from requirements, through design and testing. I have to show them our implementation and fallback plans as well as prove that those plans were followed during actual implementations. Our auditors will randomly pick projects and then go through all of the documentation and measure it against the documented lifecycle to ensure that it's all there. If something is missing, there has to be proof - either through project meeting notes or via change requests - that shows the decision was made to skip the document/project step(s). This proof also needs to discuss the risk to the product, the enterprise, our financial institutions, acquirers and/or processors. It is the goal of our project management organization to manage these risk issues and ensure that we are safeguarding our ability to process payment transactions and move money within the payments space.

In past lives with other organizations I've run the gamut from lifecycles that are even more formal than the one my teams currently use to very informal lifecycles similar to the agile paradigm. The key is understanding the culture of the organization, the risk tolerance within the organization and the time-to-market pressures within the industry. As leaders within your organization, it's your job to discuss these issues and build processes that match the particular needs of your organization, manage the risk to the organization and your customers and build a project lifecycle paradigm that can deliver.

If you'd like more information on my background: LinkedIn Profile

Monday, August 11, 2014

Using Metrics Across the Development Lifecycle

We’ve discussed metrics before, but I think it’s time to revisit the topic.  Different members of the team are going to want access to see different sets of metrics at different points in the cycle.  I think it’s critical for organizations to understand what metrics are easily available – or what can become available with a small amount of work.  Then you need to understand how to use the metrics to drive the behavior you want within the organization.  Metrics used the wrong way can create bad behavior.

I’ve known organizations in the past that rate their development team members based on the overall number of lines of code that they create.  I’ve attempted to explain to individuals that this encourages developers to focus on the wrong thing – how fast they can key in chunks of code without any care as to the number of bugs they are introducing into the application.  This may also encourage the development team to create chunks of code that are not optimized.

You may not think this is a big deal.  But I've seen real instances where a company has a transaction loop that is measured in milliseconds, only to have their transaction time crater because some joker hasn't optimized their code.  Instead of being able to handle 1,000 transactions a second, they find out they can handle fewer than 10 transactions a second.  Yeah, the guy got the code done ahead of schedule, but at the cost of preventing the system from processing transactions.  Don't worry.  The change never saw the light of day.

I've also seen organizations that award bonuses based on how early team members can deliver the functionality – a sliding scale where the earlier the product is delivered to production, the bigger the bonus.  This shifts the focus to how fast the system can be assembled vs. how well the system matches up to the original requirements or how many bugs found their way through to production.  Just a tad short-sighted if you ask me.

Tracking metrics is a fine line.  You want to create a set of metrics that allows you to understand what is happening within your lifecycle, but you need to identify and balance the metrics to get the real behavior you want from the team.  Ultimately, you need to produce a product or an enhancement to an existing product from the project that is as free of bugs as possible and that delights the customer.  If you do your job right, you’ll deliver more than the customer was expecting (under promise and over deliver).  Oh, and you'll also produce a product that drives additional revenue to the organization.

NOTE: The metrics identified below are things that I’ve seen work across the various organizations I’ve been involved with over the years.  These may or may not work for your organization.  It is important to look at the culture of your individual organization and identify what you value, what needs to be measured and how that information will ultimately be used.  Do I use all of these in the role that I currently play – no.  Some of them just don’t make sense for the way that we manage our projects and others won’t work because some of our processes haven’t fully matured.

I've seen organizations use multiple specialized roles to get through the first couple of phases of the life cycle - variously titled: Product Managers, Product Owners, Business Analysts, Process Engineers, Product Engineers ... the list could go on and on.  That said, all of the roles have the responsibility to reflect the needs of the project sponsor.

Through the scoping/discovery phase of a project, the team should be fully focused on eliciting the needs and expectations of the Sponsor.  This allows you to draw the box around what needs to be delivered and to begin to identify what will be called success.  All that said, how do you measure the success of these individuals - what metrics make sense?

The Project Office may consider the project a success if it is delivered on-time and on-budget.  However, the Sponsor's perspective is more concerned with the impact that the project has post implementation.  They would not have authorized the project to move forward unless they were expecting efficiency improvements, cost reductions or revenue increases.  Ultimately, the project was approved for some underlying reason.  It was not approved to provide busy work to a bunch of people across the organization.

From the Project Office perspective, the key metrics that they will watch - again, remember they are concerned about time, costs and resources - are gathered during the overall life cycle of the project (a rough sketch of the percentage math appears after the note below):
  1. What is the % of Project Change Requests against the identified scope that do not alter the overall boundaries of the project – minor misses?
  2. What is the % of Project Change Requests that increase the original scope of the project enlarging the overall boundaries of the project – major functional or non-functional pieces that were missed – major misses?
  3. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
From the Project Sponsor perspective, the key metrics that they will watch come after the project is considered complete and has been moved into production.
  1. Did the company achieve the expected efficiency improvements, cost savings or revenue increases?  Does the project meet the objectives identified and agreed to within the business case?
  2. Did the company hit the objectives for the total number of clients adopting the solution?
NOTE: Take a moment to reflect on the different viewpoints – the Project Office may consider the project a success because it was delivered on-time and on-budget.  However, six months to a year later, the Project Sponsor may declare that the project was a failure because the expected increase in revenue didn't appear.  What processes do you have in place to support the different needs of the Project Sponsor vs. the Project Office?  What feedback do you give to your Product Owner?
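For readers who like to see the arithmetic, here is a minimal sketch of how the Project Office percentages above might be computed.  The ChangeRequest structure, the minor/major flag and the idea of expressing change requests against the count of originally scoped items are all assumptions on my part - your PMO tooling will have its own fields - but the calculation itself is just a ratio.

```python
# Minimal sketch: change-request percentages for one project.
# The field names and the minor/major classification are hypothetical.
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    project_id: str
    alters_scope_boundary: bool   # True = major miss, False = minor miss

def change_request_metrics(requests, baseline_scope_items):
    """Return minor/major change-request percentages, measured against
    the number of originally identified scope items."""
    minor = sum(1 for cr in requests if not cr.alters_scope_boundary)
    major = sum(1 for cr in requests if cr.alters_scope_boundary)
    if baseline_scope_items == 0:
        return {"minor_pct": 0.0, "major_pct": 0.0}
    return {
        "minor_pct": 100.0 * minor / baseline_scope_items,
        "major_pct": 100.0 * major / baseline_scope_items,
    }

# Example: 2 minor misses and 1 major miss against 40 scoped items
crs = [
    ChangeRequest("PRJ-1", alters_scope_boundary=False),
    ChangeRequest("PRJ-1", alters_scope_boundary=False),
    ChangeRequest("PRJ-1", alters_scope_boundary=True),
]
print(change_request_metrics(crs, baseline_scope_items=40))
# {'minor_pct': 5.0, 'major_pct': 2.5}
```

The same calculation applies to the requirements-phase percentages further down; only the denominator changes (identified requirements instead of scoped items).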

As the project moves through the formal requirements definition phase, responsibilities shift and additional resources are brought into the team.  Additional artifacts are generated – the business case, requirements, business impacts, security assessments, risks and mitigation plans.  Quite a bit is going on in this phase and this really begins to put the underlying structure in place that will give shape to the product that is being built or enhanced.

The Project Office will have primary interest in the success of this phase and may want to look at measuring success of the team when the phase is complete based on the following metrics:
  1. What is the % of Project Change Requests against identified requirements – minor misses?
  2. What is the % of Project Change Requests that introduce new requirements – major misses?
  3. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
There are multiple team members defining requirements – one individual should be accountable for the overall set of requirements, but accountability should also extend to those responsible for contributing the requirements impacting their slice of the organization.  You may want to track metrics down to the individual contributors so that you can assess weaknesses across projects and across the organization.  For example, you may see a pattern emerge across several projects indicating that requirements associated with the sales team are consistently seeing change requests – you then have something to act on.

As the project moves downstream we begin to do the formal design work – this is where vision hits reality.  The team gets larger and begins to focus on breaking down requirements or combining requirements to understand what is being requested and translating that into technical terms that define what will actually be delivered.  If the Project Sponsor wants a hard dose of reality – this is usually the place and time for that cold water to be thrown in their face.  Sometimes what is wanted can’t be delivered – or at least can’t be delivered without significant pain.

All that said, the design is laid down and the Project Office needs to measure the success of the design using the following metrics when activity within the phase is completed:
  1. What is the % of Requirements that can’t be traced through to the Design – major misses?
  2. How many defects are identified in production that can be traced back to Architectural Design?
  3. How many defects are identified in testing that can be traced back to Architectural Design?
  4. How many changes are identified within the architecture artifacts during the following phases of the lifecycle?
  5. How many additions to the architecture artifacts are identified that account for functionality missed during the design phase?
  6. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
Then we move into the construction phase of the life cycle.  Once again, the team bulks up and this is where we lay down the pieces that come together and satisfy the original vision.  Traceability becomes an important factor across multiple disciplines in this phase.  I like to include regression testing in this phase as my teams are moving to an automated nightly build/test model.  The Project Office may want to look at the following metrics after the phase has finished (a rough traceability sketch follows the list):
  1. What is the % of Requirements that can’t be traced through to the unit tests?
  2. How many defects are found during integration testing?  Are they design or build related?  
  3. How many defects are found during quality assurance testing?  Are they design or build related?
  4. How many defects are found during user acceptance testing?  Are they design or build related?
  5. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
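As with the earlier percentages, here is a minimal sketch of how the traceability and defect counts in that list might be pulled together.  The requirement-to-test mapping and the defect tags ("found_in", "root_cause") are assumptions; in practice this data would come out of your requirements management and defect tracking tools.

```python
# Minimal sketch: requirements-to-test traceability and defect counts by phase.
# Data shapes are hypothetical placeholders for whatever your tooling exports.
def untraced_requirement_pct(requirement_to_tests):
    """% of requirements that can't be traced through to at least one unit test."""
    total = len(requirement_to_tests)
    if total == 0:
        return 0.0
    untraced = sum(1 for tests in requirement_to_tests.values() if not tests)
    return 100.0 * untraced / total

def defects_by_phase_and_cause(defects):
    """Count defects per (phase found, root cause) pair, e.g. ('qa', 'design')."""
    counts = {}
    for d in defects:
        key = (d["found_in"], d["root_cause"])
        counts[key] = counts.get(key, 0) + 1
    return counts

coverage = {
    "REQ-001": ["test_auth_settlement"],
    "REQ-002": [],                      # no unit test traces back to this requirement
    "REQ-003": ["test_reversal", "test_timeout"],
}
defects = [
    {"found_in": "integration", "root_cause": "build"},
    {"found_in": "qa", "root_cause": "design"},
    {"found_in": "uat", "root_cause": "build"},
]
print(untraced_requirement_pct(coverage))    # 33.33...
print(defects_by_phase_and_cause(defects))   # {('integration', 'build'): 1, ('qa', 'design'): 1, ('uat', 'build'): 1}
```

The design-phase traceability metric works the same way; swap the unit-test mapping for a mapping of requirements to design artifacts.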
We're almost there, folks!  We now move into the formal testing phase - formal quality assurance, user acceptance tests, load testing, parallel testing.  This is all the stuff we need to do to make sure the product is ready for prime time.  The Project Office will look at this activity and want to track a few more metrics once this phase has been completed:
  1. How many defects are found in the production environment?  Are they design or build related - or are they related to the implementation?
  2. Did they miss the target milestone to complete all activity associated with this phase of the lifecycle?
If you review the metrics identified at each stage of the lifecycle, you'll see that most of these metrics assess issues that can impact the final product delivered to the customer.  The one consistent measurement across all phases that the Project Office will want to know about - but that is not necessarily tied to the user experience - is: "Did they miss the target milestone ...".  This is more about keeping accountability within the overall team to ensure that things keep moving.  And the most important metric won't be known until after the project is in production: are the customers accepting the product, and is the company experiencing the revenue increase or the efficiencies that were planned?

Tags: Project Management; Software Development Lifecycle; Teams; SDLC; Change Management; Portfolio Management; Project Portfolio; Project Metrics; Metrics;


For more information on David L. Collison: LinkedIn Profile